00:00:00.000 Started by upstream project "autotest-nightly" build number 4343
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3706
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.067 The recommended git tool is: git
00:00:00.067 using credential 00000000-0000-0000-0000-000000000002
00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.112 Fetching changes from the remote Git repository
00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.192 Using shallow fetch with depth 1
00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.192 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.199 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.211 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.223 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.223 > git config core.sparsecheckout # timeout=10
00:00:04.233 > git read-tree -mu HEAD # timeout=10
00:00:04.247 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.271 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.272 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.364 [Pipeline] Start of Pipeline
00:00:04.378 [Pipeline] library
00:00:04.379 Loading library shm_lib@master
00:00:04.379 Library shm_lib@master is cached. Copying from home.
00:00:04.394 [Pipeline] node
00:00:04.403 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.404 [Pipeline] {
00:00:04.414 [Pipeline] catchError
00:00:04.416 [Pipeline] {
00:00:04.426 [Pipeline] wrap
00:00:04.433 [Pipeline] {
00:00:04.439 [Pipeline] stage
00:00:04.440 [Pipeline] { (Prologue)
00:00:04.635 [Pipeline] sh
00:00:04.929 + logger -p user.info -t JENKINS-CI
00:00:04.950 [Pipeline] echo
00:00:04.952 Node: CYP12
00:00:04.960 [Pipeline] sh
00:00:05.272 [Pipeline] setCustomBuildProperty
00:00:05.286 [Pipeline] echo
00:00:05.288 Cleanup processes
00:00:05.296 [Pipeline] sh
00:00:05.615 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.615 2154483 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.630 [Pipeline] sh
00:00:05.920 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.920 ++ grep -v 'sudo pgrep'
00:00:05.920 ++ awk '{print $1}'
00:00:05.920 + sudo kill -9
00:00:05.920 + true
00:00:05.943 [Pipeline] cleanWs
00:00:05.952 [WS-CLEANUP] Deleting project workspace...
00:00:05.952 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.958 [WS-CLEANUP] done
00:00:05.962 [Pipeline] setCustomBuildProperty
00:00:05.974 [Pipeline] sh
00:00:06.258 + sudo git config --global --replace-all safe.directory '*'
00:00:06.355 [Pipeline] httpRequest
00:00:06.730 [Pipeline] echo
00:00:06.732 Sorcerer 10.211.164.20 is alive
00:00:06.740 [Pipeline] retry
00:00:06.742 [Pipeline] {
00:00:06.753 [Pipeline] httpRequest
00:00:06.758 HttpMethod: GET
00:00:06.758 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.759 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.763 Response Code: HTTP/1.1 200 OK
00:00:06.763 Success: Status code 200 is in the accepted range: 200,404
00:00:06.764 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.232 [Pipeline] }
00:00:07.249 [Pipeline] // retry
00:00:07.257 [Pipeline] sh
00:00:07.542 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.555 [Pipeline] httpRequest
00:00:08.464 [Pipeline] echo
00:00:08.465 Sorcerer 10.211.164.20 is alive
00:00:08.473 [Pipeline] retry
00:00:08.475 [Pipeline] {
00:00:08.487 [Pipeline] httpRequest
00:00:08.491 HttpMethod: GET
00:00:08.491 URL: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:08.493 Sending request to url: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:08.516 Response Code: HTTP/1.1 200 OK
00:00:08.517 Success: Status code 200 is in the accepted range: 200,404
00:00:08.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:53.729 [Pipeline] }
00:00:53.749 [Pipeline] // retry
00:00:53.756 [Pipeline] sh
00:00:54.045 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:57.352 [Pipeline] sh
00:00:57.656 + git -C spdk log --oneline -n5
00:00:57.656 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:00:57.656 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:00:57.656 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:00:57.656 0ea9ac02f accel/mlx5: Create pool of UMRs
00:00:57.656 60adca7e1 lib/mlx5: API to configure UMR
00:00:57.668 [Pipeline] }
00:00:57.680 [Pipeline] // stage
00:00:57.689 [Pipeline] stage
00:00:57.691 [Pipeline] { (Prepare)
00:00:57.706 [Pipeline] writeFile
00:00:57.721 [Pipeline] sh
00:00:58.008 + logger -p user.info -t JENKINS-CI
00:00:58.021 [Pipeline] sh
00:00:58.309 + logger -p user.info -t JENKINS-CI
00:00:58.322 [Pipeline] sh
00:00:58.609 + cat autorun-spdk.conf
00:00:58.609 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.609 SPDK_TEST_NVMF=1
00:00:58.609 SPDK_TEST_NVME_CLI=1
00:00:58.609 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.609 SPDK_TEST_NVMF_NICS=e810
00:00:58.609 SPDK_RUN_ASAN=1
00:00:58.609 SPDK_RUN_UBSAN=1
00:00:58.609 NET_TYPE=phy
00:00:58.618 RUN_NIGHTLY=1
00:00:58.623 [Pipeline] readFile
00:00:58.670 [Pipeline] withEnv
00:00:58.673 [Pipeline] {
00:00:58.686 [Pipeline] sh
00:00:58.978 + set -ex
00:00:58.978 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:58.978 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:58.978 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.978 ++ SPDK_TEST_NVMF=1
00:00:58.978 ++ SPDK_TEST_NVME_CLI=1
00:00:58.978 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.978 ++ SPDK_TEST_NVMF_NICS=e810
00:00:58.978 ++ SPDK_RUN_ASAN=1
00:00:58.978 ++ SPDK_RUN_UBSAN=1
00:00:58.978 ++ NET_TYPE=phy
00:00:58.978 ++ RUN_NIGHTLY=1
00:00:58.978 + case $SPDK_TEST_NVMF_NICS in
00:00:58.978 + DRIVERS=ice
00:00:58.978 + [[ tcp == \r\d\m\a ]]
00:00:58.978 + [[ -n ice ]]
00:00:58.978 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:58.978 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:58.978 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:58.978 rmmod: ERROR: Module irdma is not currently loaded
00:00:58.978 rmmod: ERROR: Module i40iw is not currently loaded
00:00:58.978 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:58.978 + true
00:00:58.978 + for D in $DRIVERS
00:00:58.978 + sudo modprobe ice
00:00:58.978 + exit 0
00:00:58.989 [Pipeline] }
00:00:59.004 [Pipeline] // withEnv
00:00:59.009 [Pipeline] }
00:00:59.023 [Pipeline] // stage
00:00:59.036 [Pipeline] catchError
00:00:59.039 [Pipeline] {
00:00:59.056 [Pipeline] timeout
00:00:59.057 Timeout set to expire in 1 hr 0 min
00:00:59.059 [Pipeline] {
00:00:59.074 [Pipeline] stage
00:00:59.076 [Pipeline] { (Tests)
00:00:59.091 [Pipeline] sh
00:00:59.381 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.381 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.381 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.381 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:59.381 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.381 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.381 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:59.381 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.381 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.381 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.381 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:59.381 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.381 + source /etc/os-release
00:00:59.381 ++ NAME='Fedora Linux'
00:00:59.381 ++ VERSION='39 (Cloud Edition)'
00:00:59.381 ++ ID=fedora
00:00:59.381 ++ VERSION_ID=39
00:00:59.381 ++ VERSION_CODENAME=
00:00:59.381 ++ PLATFORM_ID=platform:f39
00:00:59.381 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:59.381 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:59.381 ++ LOGO=fedora-logo-icon
00:00:59.381 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:59.381 ++ HOME_URL=https://fedoraproject.org/
00:00:59.381 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:59.381 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:59.381 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:59.381 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:59.381 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:59.381 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:59.381 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:59.381 ++ SUPPORT_END=2024-11-12
00:00:59.381 ++ VARIANT='Cloud Edition'
00:00:59.381 ++ VARIANT_ID=cloud
00:00:59.381 + uname -a
00:00:59.381 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:59.381 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:01.925 Hugepages
00:01:01.925 node hugesize free / total
00:01:01.925 node0 1048576kB 0 / 0
00:01:01.925 node0 2048kB 0 / 0
00:01:01.925 node1 1048576kB 0 / 0
00:01:01.925 node1 2048kB 0 / 0
00:01:01.925
00:01:01.925 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.925 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:01.925 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:02.184 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:02.184 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:02.184 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:02.184 + rm -f /tmp/spdk-ld-path
00:01:02.184 + source autorun-spdk.conf
00:01:02.184 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.184 ++ SPDK_TEST_NVMF=1
00:01:02.184 ++ SPDK_TEST_NVME_CLI=1
00:01:02.184 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.184 ++ SPDK_TEST_NVMF_NICS=e810
00:01:02.184 ++ SPDK_RUN_ASAN=1
00:01:02.184 ++ SPDK_RUN_UBSAN=1
00:01:02.184 ++ NET_TYPE=phy
00:01:02.184 ++ RUN_NIGHTLY=1
00:01:02.184 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:02.184 + [[ -n '' ]]
00:01:02.184 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.184 + for M in /var/spdk/build-*-manifest.txt
00:01:02.184 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:02.184 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.184 + for M in /var/spdk/build-*-manifest.txt
00:01:02.184 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:02.184 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.184 + for M in /var/spdk/build-*-manifest.txt
00:01:02.184 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:02.184 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.184 ++ uname
00:01:02.184 + [[ Linux == \L\i\n\u\x ]]
00:01:02.185 + sudo dmesg -T
00:01:02.445 + sudo dmesg --clear
00:01:02.445 + dmesg_pid=2155929
00:01:02.445 + [[ Fedora Linux == FreeBSD ]]
00:01:02.445 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.445 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.445 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:02.445 + [[ -x /usr/src/fio-static/fio ]]
00:01:02.445 + export FIO_BIN=/usr/src/fio-static/fio
00:01:02.445 + FIO_BIN=/usr/src/fio-static/fio
00:01:02.445 + sudo dmesg -Tw
00:01:02.445 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:02.445 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:02.445 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:02.445 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:02.445 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:02.445 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:02.445 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:02.445 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:02.445 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:02.445 11:13:01 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:02.445 11:13:01 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:02.445 11:13:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:02.446 11:13:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:02.446 11:13:01 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:02.446 11:13:01 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:02.446 11:13:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:02.446 11:13:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:02.446 11:13:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:02.446 11:13:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:02.446 11:13:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:02.446 11:13:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:02.446 11:13:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:02.446 11:13:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:02.446 11:13:01 -- paths/export.sh@5 -- $ export PATH
00:01:02.446 11:13:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:02.446 11:13:01 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:02.446 11:13:01 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:02.446 11:13:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733566381.XXXXXX
00:01:02.446 11:13:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733566381.5NakvT
00:01:02.446 11:13:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:02.446 11:13:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:02.446 11:13:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:02.446 11:13:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:02.446 11:13:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:02.446 11:13:01 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:02.446 11:13:01 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:02.446 11:13:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.446 11:13:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:02.446 11:13:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:02.446 11:13:01 -- pm/common@17 -- $ local monitor
00:01:02.446 11:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.446 11:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.446 11:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.446 11:13:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:02.446 11:13:01 -- pm/common@21 -- $ date +%s
00:01:02.446 11:13:01 -- pm/common@25 -- $ sleep 1
00:01:02.446 11:13:01 -- pm/common@21 -- $ date +%s
00:01:02.446 11:13:01 -- pm/common@21 -- $ date +%s
00:01:02.446 11:13:01 -- pm/common@21 -- $ date +%s
00:01:02.446 11:13:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733566381
00:01:02.446 11:13:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733566381
00:01:02.446 11:13:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733566381
00:01:02.446 11:13:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733566381
00:01:02.446 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733566381_collect-cpu-load.pm.log
00:01:02.446 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733566381_collect-vmstat.pm.log
00:01:02.446 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733566381_collect-cpu-temp.pm.log
00:01:02.707 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733566381_collect-bmc-pm.bmc.pm.log
00:01:03.654 11:13:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:03.654 11:13:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:03.654 11:13:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:03.654 11:13:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:03.654 11:13:02 -- spdk/autobuild.sh@16 -- $ date -u
00:01:03.654 Sat Dec 7 10:13:02 AM UTC 2024
00:01:03.654 11:13:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:03.654 v25.01-pre-311-ga2f5e1c2d
00:01:03.654 11:13:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:03.654 11:13:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:03.654 11:13:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:03.654 11:13:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:03.654 11:13:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:03.654 ************************************
00:01:03.654 START TEST asan
00:01:03.654 ************************************
00:01:03.654 11:13:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:03.654 using asan
00:01:03.654
00:01:03.654 real 0m0.001s
00:01:03.654 user 0m0.000s
00:01:03.654 sys 0m0.000s
00:01:03.654 11:13:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:03.654 11:13:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:03.654 ************************************
00:01:03.654 END TEST asan
00:01:03.654 ************************************
00:01:03.654 11:13:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:03.654 11:13:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:03.654 11:13:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:03.654 11:13:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:03.654 11:13:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:03.654 ************************************
00:01:03.654 START TEST ubsan
00:01:03.654 ************************************
00:01:03.654 11:13:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:03.654 using ubsan
00:01:03.654
00:01:03.654 real 0m0.001s
00:01:03.654 user 0m0.001s
00:01:03.654 sys 0m0.000s
00:01:03.654 11:13:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:03.654 11:13:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:03.654 ************************************
00:01:03.654 END TEST ubsan
00:01:03.654 ************************************
00:01:03.654 11:13:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:03.654 11:13:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:03.654 11:13:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:03.654 11:13:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:03.915 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:03.915 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:04.176 Using 'verbs' RDMA provider
00:01:20.028 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:32.263 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:32.263 Creating mk/config.mk...done.
00:01:32.263 Creating mk/cc.flags.mk...done.
00:01:32.263 Type 'make' to build.
00:01:32.263 11:13:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:32.263 11:13:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:32.263 11:13:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:32.263 11:13:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.263 ************************************
00:01:32.263 START TEST make
00:01:32.263 ************************************
00:01:32.263 11:13:31 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:32.263 make[1]: Nothing to be done for 'all'.
00:01:42.269 The Meson build system
00:01:42.269 Version: 1.5.0
00:01:42.269 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:42.269 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:42.269 Build type: native build
00:01:42.269 Program cat found: YES (/usr/bin/cat)
00:01:42.269 Project name: DPDK
00:01:42.269 Project version: 24.03.0
00:01:42.269 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:42.269 C linker for the host machine: cc ld.bfd 2.40-14
00:01:42.269 Host machine cpu family: x86_64
00:01:42.269 Host machine cpu: x86_64
00:01:42.269 Message: ## Building in Developer Mode ##
00:01:42.269 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:42.269 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:42.269 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:42.269 Program python3 found: YES (/usr/bin/python3)
00:01:42.269 Program cat found: YES (/usr/bin/cat)
00:01:42.269 Compiler for C supports arguments -march=native: YES
00:01:42.269 Checking for size of "void *" : 8
00:01:42.269 Checking for size of "void *" : 8 (cached)
00:01:42.269 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:42.269 Library m found: YES
00:01:42.269 Library numa found: YES
00:01:42.269 Has header "numaif.h" : YES
00:01:42.269 Library fdt found: NO
00:01:42.269 Library execinfo found: NO
00:01:42.269 Has header "execinfo.h" : YES
00:01:42.269 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:42.269 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:42.269 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:42.269 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:42.269 Run-time dependency openssl found: YES 3.1.1
00:01:42.269 Run-time dependency libpcap found: YES 1.10.4
00:01:42.269 Has header "pcap.h" with dependency libpcap: YES
00:01:42.269 Compiler for C supports arguments -Wcast-qual: YES
00:01:42.269 Compiler for C supports arguments -Wdeprecated: YES
00:01:42.269 Compiler for C supports arguments -Wformat: YES
00:01:42.270 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:42.270 Compiler for C supports arguments -Wformat-security: NO
00:01:42.270 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.270 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:42.270 Compiler for C supports arguments -Wnested-externs: YES
00:01:42.270 Compiler for C supports arguments -Wold-style-definition: YES
00:01:42.270 Compiler for C supports arguments -Wpointer-arith: YES
00:01:42.270 Compiler for C supports arguments -Wsign-compare: YES
00:01:42.270 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:42.270 Compiler for C supports arguments -Wundef: YES
00:01:42.270 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.270 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:42.270 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:42.270 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.270 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:42.270 Program objdump found: YES (/usr/bin/objdump)
00:01:42.270 Compiler for C supports arguments -mavx512f: YES
00:01:42.270 Checking if "AVX512 checking" compiles: YES
00:01:42.270 Fetching value of define "__SSE4_2__" : 1
00:01:42.270 Fetching value of define "__AES__" : 1
00:01:42.270 Fetching value of define "__AVX__" : 1
00:01:42.270 Fetching value of define "__AVX2__" : 1
00:01:42.270 Fetching value of define "__AVX512BW__" : 1
00:01:42.270 Fetching value of define "__AVX512CD__" : 1
00:01:42.270 Fetching value of define "__AVX512DQ__" : 1
00:01:42.270 Fetching value of define "__AVX512F__" : 1
00:01:42.270 Fetching value of define "__AVX512VL__" : 1
00:01:42.270 Fetching value of define "__PCLMUL__" : 1
00:01:42.270 Fetching value of define "__RDRND__" : 1
00:01:42.270 Fetching value of define "__RDSEED__" : 1
00:01:42.270 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:42.270 Fetching value of define "__znver1__" : (undefined)
00:01:42.270 Fetching value of define "__znver2__" : (undefined)
00:01:42.270 Fetching value of define "__znver3__" : (undefined)
00:01:42.270 Fetching value of define "__znver4__" : (undefined)
00:01:42.270 Library asan found: YES
00:01:42.270 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:42.270 Message: lib/log: Defining dependency "log"
00:01:42.270 Message: lib/kvargs: Defining dependency "kvargs"
00:01:42.270 Message: lib/telemetry: Defining dependency "telemetry"
00:01:42.270 Library rt found: YES
00:01:42.270 Checking for function "getentropy" : NO
00:01:42.270 Message: lib/eal: Defining dependency "eal"
00:01:42.270 Message: lib/ring: Defining dependency "ring"
00:01:42.270 Message: lib/rcu: Defining dependency "rcu"
00:01:42.270 Message: lib/mempool: Defining dependency "mempool"
00:01:42.270 Message: lib/mbuf: Defining dependency "mbuf"
00:01:42.270 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:42.270 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:42.270 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:42.270 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:42.270 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:42.270 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:42.270 Compiler for C supports arguments -mpclmul: YES
00:01:42.270 Compiler for C supports arguments -maes: YES
00:01:42.270 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:42.270 Compiler for C supports arguments -mavx512bw: YES
00:01:42.270 Compiler for C supports arguments -mavx512dq: YES
00:01:42.270 Compiler for C supports arguments -mavx512vl: YES
00:01:42.270 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:42.270 Compiler for C supports arguments -mavx2: YES
00:01:42.270 Compiler for C supports arguments -mavx: YES
00:01:42.270 Message: lib/net: Defining dependency "net"
00:01:42.270 Message: lib/meter: Defining dependency "meter"
00:01:42.270 Message: lib/ethdev: Defining dependency "ethdev"
00:01:42.270 Message: lib/pci: Defining dependency "pci"
00:01:42.270 Message: lib/cmdline: Defining dependency "cmdline"
00:01:42.270 Message: lib/hash: Defining dependency "hash"
00:01:42.270 Message: lib/timer: Defining dependency "timer"
00:01:42.270 Message: lib/compressdev: Defining dependency "compressdev"
00:01:42.270 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:42.270 Message: lib/dmadev: Defining dependency "dmadev"
00:01:42.270 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:42.270 Message: lib/power: Defining dependency "power"
00:01:42.270 Message: lib/reorder: Defining dependency "reorder"
00:01:42.270 Message: lib/security: Defining dependency "security"
00:01:42.270 Has header "linux/userfaultfd.h" : YES
00:01:42.270 Has header "linux/vduse.h" : YES
00:01:42.270 Message: lib/vhost: Defining dependency "vhost"
00:01:42.270 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:42.270 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:42.270 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:42.270 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:42.270 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:42.270 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:42.270 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:42.270 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:42.270 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:42.270 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:42.270 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:42.270 Configuring doxy-api-html.conf using configuration
00:01:42.270 Configuring doxy-api-man.conf using configuration
00:01:42.270 Program mandb found: YES (/usr/bin/mandb)
00:01:42.270 Program sphinx-build found: NO
00:01:42.270 Configuring rte_build_config.h using configuration
00:01:42.270 Message:
00:01:42.270 =================
00:01:42.270 Applications Enabled
00:01:42.270 =================
00:01:42.270
00:01:42.270 apps:
00:01:42.270
00:01:42.270
00:01:42.270 Message:
00:01:42.270 =================
00:01:42.270 Libraries Enabled
00:01:42.270 =================
00:01:42.270
00:01:42.270 libs:
00:01:42.270 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:42.270 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:42.270 cryptodev, dmadev, power, reorder, security, vhost,
00:01:42.270
00:01:42.270 Message:
00:01:42.270 ===============
00:01:42.270 Drivers Enabled
00:01:42.270 ===============
00:01:42.270
00:01:42.270 common:
00:01:42.270
00:01:42.270 bus:
00:01:42.270 pci, vdev,
00:01:42.270 mempool:
00:01:42.270 ring,
00:01:42.270 dma:
00:01:42.270
00:01:42.270 net:
00:01:42.270
00:01:42.270 crypto:
00:01:42.270
00:01:42.270 compress:
00:01:42.270
00:01:42.270 vdpa:
00:01:42.270
00:01:42.270
00:01:42.270 Message:
00:01:42.270 =================
00:01:42.270 Content Skipped
00:01:42.270 ================= 
00:01:42.270 
00:01:42.270 apps: 
00:01:42.270 dumpcap: explicitly disabled via build config 
00:01:42.270 graph: explicitly disabled via build config 
00:01:42.270 pdump: explicitly disabled via build config 
00:01:42.270 proc-info: explicitly disabled via build config 
00:01:42.270 test-acl: explicitly disabled via build config 
00:01:42.270 test-bbdev: explicitly disabled via build config 
00:01:42.270 test-cmdline: explicitly disabled via build config 
00:01:42.270 test-compress-perf: explicitly disabled via build config 
00:01:42.270 test-crypto-perf: explicitly disabled via build config 
00:01:42.270 test-dma-perf: explicitly disabled via build config 
00:01:42.270 test-eventdev: explicitly disabled via build config 
00:01:42.270 test-fib: explicitly disabled via build config 
00:01:42.270 test-flow-perf: explicitly disabled via build config 
00:01:42.270 test-gpudev: explicitly disabled via build config 
00:01:42.270 test-mldev: explicitly disabled via build config 
00:01:42.270 test-pipeline: explicitly disabled via build config 
00:01:42.270 test-pmd: explicitly disabled via build config 
00:01:42.270 test-regex: explicitly disabled via build config 
00:01:42.270 test-sad: explicitly disabled via build config 
00:01:42.270 test-security-perf: explicitly disabled via build config 
00:01:42.270 
00:01:42.270 libs: 
00:01:42.270 argparse: explicitly disabled via build config 
00:01:42.270 metrics: explicitly disabled via build config 
00:01:42.270 acl: explicitly disabled via build config 
00:01:42.270 bbdev: explicitly disabled via build config 
00:01:42.270 bitratestats: explicitly disabled via build config 
00:01:42.270 bpf: explicitly disabled via build config 
00:01:42.270 cfgfile: explicitly disabled via build config 
00:01:42.270 distributor: explicitly disabled via build config 
00:01:42.270 efd: explicitly disabled via build config 
00:01:42.270 eventdev: explicitly disabled via build config 
00:01:42.270 dispatcher: explicitly disabled via build config 
00:01:42.270 gpudev: explicitly disabled via build config 
00:01:42.270 gro: explicitly disabled via build config 
00:01:42.270 gso: explicitly disabled via build config 
00:01:42.270 ip_frag: explicitly disabled via build config 
00:01:42.270 jobstats: explicitly disabled via build config 
00:01:42.270 latencystats: explicitly disabled via build config 
00:01:42.270 lpm: explicitly disabled via build config 
00:01:42.270 member: explicitly disabled via build config 
00:01:42.270 pcapng: explicitly disabled via build config 
00:01:42.270 rawdev: explicitly disabled via build config 
00:01:42.270 regexdev: explicitly disabled via build config 
00:01:42.270 mldev: explicitly disabled via build config 
00:01:42.270 rib: explicitly disabled via build config 
00:01:42.270 sched: explicitly disabled via build config 
00:01:42.270 stack: explicitly disabled via build config 
00:01:42.270 ipsec: explicitly disabled via build config 
00:01:42.270 pdcp: explicitly disabled via build config 
00:01:42.270 fib: explicitly disabled via build config 
00:01:42.270 port: explicitly disabled via build config 
00:01:42.270 pdump: explicitly disabled via build config 
00:01:42.270 table: explicitly disabled via build config 
00:01:42.270 pipeline: explicitly disabled via build config 
00:01:42.271 graph: explicitly disabled via build config 
00:01:42.271 node: explicitly disabled via build config 
00:01:42.271 
00:01:42.271 drivers: 
00:01:42.271 common/cpt: not in enabled drivers build config 
00:01:42.271 common/dpaax: not in enabled drivers build config 
00:01:42.271 common/iavf: not in enabled drivers build config 
00:01:42.271 common/idpf: not in enabled drivers build config 
00:01:42.271 common/ionic: not in enabled drivers build config 
00:01:42.271 common/mvep: not in enabled drivers build config 
00:01:42.271 common/octeontx: not in enabled drivers build config 
00:01:42.271 bus/auxiliary: not in enabled drivers build config 
00:01:42.271 bus/cdx: not in enabled drivers build config 
00:01:42.271 bus/dpaa: not in enabled drivers build config 
00:01:42.271 bus/fslmc: not in enabled drivers build config 
00:01:42.271 bus/ifpga: not in enabled drivers build config 
00:01:42.271 bus/platform: not in enabled drivers build config 
00:01:42.271 bus/uacce: not in enabled drivers build config 
00:01:42.271 bus/vmbus: not in enabled drivers build config 
00:01:42.271 common/cnxk: not in enabled drivers build config 
00:01:42.271 common/mlx5: not in enabled drivers build config 
00:01:42.271 common/nfp: not in enabled drivers build config 
00:01:42.271 common/nitrox: not in enabled drivers build config 
00:01:42.271 common/qat: not in enabled drivers build config 
00:01:42.271 common/sfc_efx: not in enabled drivers build config 
00:01:42.271 mempool/bucket: not in enabled drivers build config 
00:01:42.271 mempool/cnxk: not in enabled drivers build config 
00:01:42.271 mempool/dpaa: not in enabled drivers build config 
00:01:42.271 mempool/dpaa2: not in enabled drivers build config 
00:01:42.271 mempool/octeontx: not in enabled drivers build config 
00:01:42.271 mempool/stack: not in enabled drivers build config 
00:01:42.271 dma/cnxk: not in enabled drivers build config 
00:01:42.271 dma/dpaa: not in enabled drivers build config 
00:01:42.271 dma/dpaa2: not in enabled drivers build config 
00:01:42.271 dma/hisilicon: not in enabled drivers build config 
00:01:42.271 dma/idxd: not in enabled drivers build config 
00:01:42.271 dma/ioat: not in enabled drivers build config 
00:01:42.271 dma/skeleton: not in enabled drivers build config 
00:01:42.271 net/af_packet: not in enabled drivers build config 
00:01:42.271 net/af_xdp: not in enabled drivers build config 
00:01:42.271 net/ark: not in enabled drivers build config 
00:01:42.271 net/atlantic: not in enabled drivers build config 
00:01:42.271 net/avp: not in enabled drivers build config 
00:01:42.271 net/axgbe: not in enabled drivers build config 
00:01:42.271 net/bnx2x: not in enabled drivers build config 
00:01:42.271 net/bnxt: not in enabled drivers build config 
00:01:42.271 net/bonding: not in enabled drivers build config 
00:01:42.271 net/cnxk: not in enabled drivers build config 
00:01:42.271 net/cpfl: not in enabled drivers build config 
00:01:42.271 net/cxgbe: not in enabled drivers build config 
00:01:42.271 net/dpaa: not in enabled drivers build config 
00:01:42.271 net/dpaa2: not in enabled drivers build config 
00:01:42.271 net/e1000: not in enabled drivers build config 
00:01:42.271 net/ena: not in enabled drivers build config 
00:01:42.271 net/enetc: not in enabled drivers build config 
00:01:42.271 net/enetfec: not in enabled drivers build config 
00:01:42.271 net/enic: not in enabled drivers build config 
00:01:42.271 net/failsafe: not in enabled drivers build config 
00:01:42.271 net/fm10k: not in enabled drivers build config 
00:01:42.271 net/gve: not in enabled drivers build config 
00:01:42.271 net/hinic: not in enabled drivers build config 
00:01:42.271 net/hns3: not in enabled drivers build config 
00:01:42.271 net/i40e: not in enabled drivers build config 
00:01:42.271 net/iavf: not in enabled drivers build config 
00:01:42.271 net/ice: not in enabled drivers build config 
00:01:42.271 net/idpf: not in enabled drivers build config 
00:01:42.271 net/igc: not in enabled drivers build config 
00:01:42.271 net/ionic: not in enabled drivers build config 
00:01:42.271 net/ipn3ke: not in enabled drivers build config 
00:01:42.271 net/ixgbe: not in enabled drivers build config 
00:01:42.271 net/mana: not in enabled drivers build config 
00:01:42.271 net/memif: not in enabled drivers build config 
00:01:42.271 net/mlx4: not in enabled drivers build config 
00:01:42.271 net/mlx5: not in enabled drivers build config 
00:01:42.271 net/mvneta: not in enabled drivers build config 
00:01:42.271 net/mvpp2: not in enabled drivers build config 
00:01:42.271 net/netvsc: not in enabled drivers build config 
00:01:42.271 net/nfb: not in enabled drivers build config 
00:01:42.271 net/nfp: not in enabled drivers build config 
00:01:42.271 net/ngbe: not in enabled drivers build config 
00:01:42.271 net/null: not in enabled drivers build config 
00:01:42.271 net/octeontx: not in enabled drivers build config 
00:01:42.271 net/octeon_ep: not in enabled drivers build config 
00:01:42.271 net/pcap: not in enabled drivers build config 
00:01:42.271 net/pfe: not in enabled drivers build config 
00:01:42.271 net/qede: not in enabled drivers build config 
00:01:42.271 net/ring: not in enabled drivers build config 
00:01:42.271 net/sfc: not in enabled drivers build config 
00:01:42.271 net/softnic: not in enabled drivers build config 
00:01:42.271 net/tap: not in enabled drivers build config 
00:01:42.271 net/thunderx: not in enabled drivers build config 
00:01:42.271 net/txgbe: not in enabled drivers build config 
00:01:42.271 net/vdev_netvsc: not in enabled drivers build config 
00:01:42.271 net/vhost: not in enabled drivers build config 
00:01:42.271 net/virtio: not in enabled drivers build config 
00:01:42.271 net/vmxnet3: not in enabled drivers build config 
00:01:42.271 raw/*: missing internal dependency, "rawdev" 
00:01:42.271 crypto/armv8: not in enabled drivers build config 
00:01:42.271 crypto/bcmfs: not in enabled drivers build config 
00:01:42.271 crypto/caam_jr: not in enabled drivers build config 
00:01:42.271 crypto/ccp: not in enabled drivers build config 
00:01:42.271 crypto/cnxk: not in enabled drivers build config 
00:01:42.271 crypto/dpaa_sec: not in enabled drivers build config 
00:01:42.271 crypto/dpaa2_sec: not in enabled drivers build config 
00:01:42.271 crypto/ipsec_mb: not in enabled drivers build config 
00:01:42.271 crypto/mlx5: not in enabled drivers build config 
00:01:42.271 crypto/mvsam: not in enabled drivers build config 
00:01:42.271 crypto/nitrox: not in enabled drivers build config 
00:01:42.271 crypto/null: not in enabled drivers build config 
00:01:42.271 crypto/octeontx: not in enabled drivers build config 
00:01:42.271 crypto/openssl: not in enabled drivers build config 
00:01:42.271 crypto/scheduler: not in enabled drivers build config 
00:01:42.271 crypto/uadk: not in enabled drivers build config 
00:01:42.271 crypto/virtio: not in enabled drivers build config 
00:01:42.271 compress/isal: not in enabled drivers build config 
00:01:42.271 compress/mlx5: not in enabled drivers build config 
00:01:42.271 compress/nitrox: not in enabled drivers build config 
00:01:42.271 compress/octeontx: not in enabled drivers build config 
00:01:42.271 compress/zlib: not in enabled drivers build config 
00:01:42.271 regex/*: missing internal dependency, "regexdev" 
00:01:42.271 ml/*: missing internal dependency, "mldev" 
00:01:42.271 vdpa/ifc: not in enabled drivers build config 
00:01:42.271 vdpa/mlx5: not in enabled drivers build config 
00:01:42.271 vdpa/nfp: not in enabled drivers build config 
00:01:42.271 vdpa/sfc: not in enabled drivers build config 
00:01:42.271 event/*: missing internal dependency, "eventdev" 
00:01:42.271 baseband/*: missing internal dependency, "bbdev" 
00:01:42.271 gpu/*: missing internal dependency, "gpudev" 
00:01:42.271 
00:01:42.271 
00:01:42.271 Build targets in project: 84 
00:01:42.271 
00:01:42.271 DPDK 24.03.0 
00:01:42.271 
00:01:42.271 User defined options 
00:01:42.271 buildtype : debug 
00:01:42.271 default_library : shared 
00:01:42.271 libdir : lib 
00:01:42.271 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 
00:01:42.271 b_sanitize : address 
00:01:42.271 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:01:42.271 c_link_args : 
00:01:42.271 cpu_instruction_set: native 
00:01:42.271 disable_apps : test-bbdev,test-pipeline,test-acl,test-gpudev,test-security-perf,test,test-dma-perf,test-regex,test-compress-perf,test-eventdev,graph,proc-info,test-pmd,test-crypto-perf,test-cmdline,test-fib,pdump,test-sad,test-flow-perf,test-mldev,dumpcap 
00:01:42.271 disable_libs : metrics,node,acl,pdcp,gro,table,ipsec,pcapng,efd,dispatcher,gpudev,regexdev,bitratestats,argparse,port,rib,bpf,cfgfile,stack,graph,rawdev,distributor,lpm,sched,ip_frag,jobstats,pdump,pipeline,eventdev,mldev,member,gso,latencystats,fib,bbdev 
00:01:42.271 enable_docs : false 
00:01:42.271 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 
00:01:42.271 enable_kmods : false 
00:01:42.271 max_lcores : 128 
00:01:42.271 tests : false 
00:01:42.271 
00:01:42.271 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 
00:01:42.271 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 
00:01:42.271 [1/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 
00:01:42.271 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
00:01:42.271 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
00:01:42.271 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:01:42.271 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 
00:01:42.271 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 
00:01:42.271 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
00:01:42.271 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
00:01:42.271 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:01:42.271 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:01:42.271 [11/267] Linking static target lib/librte_kvargs.a 
00:01:42.271 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 
00:01:42.271 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 
00:01:42.271 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
00:01:42.271 [15/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:42.271 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:42.271 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:42.271 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:42.272 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:42.272 [20/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:42.272 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:42.272 [22/267] Linking static target lib/librte_log.a 00:01:42.272 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:42.272 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:42.272 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:42.272 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:42.272 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:42.272 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.272 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:42.272 [30/267] Linking static target lib/librte_pci.a 00:01:42.272 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.272 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.272 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:42.272 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:42.272 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.272 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:42.272 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.272 [38/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.272 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.272 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:42.272 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.272 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:42.272 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:42.272 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:42.272 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:42.272 [46/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.272 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:42.272 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:42.272 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:42.272 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:42.272 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:42.272 [52/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:42.272 [53/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:42.272 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:42.272 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:42.272 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:42.272 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:42.272 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:42.272 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:42.272 [60/267] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:42.272 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:42.272 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:42.272 [63/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:42.272 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:42.272 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:42.272 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:42.272 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:42.272 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:42.272 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:42.272 [70/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:42.272 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:42.272 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:42.272 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:42.272 [74/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:42.272 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:42.272 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:42.272 [77/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:42.272 [78/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:42.272 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:42.272 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:42.272 [81/267] Linking static target lib/librte_telemetry.a 00:01:42.272 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:01:42.272 [83/267] Linking static target lib/librte_meter.a 00:01:42.272 [84/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:42.272 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:42.272 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:42.272 [87/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:42.272 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:42.272 [89/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:42.272 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:42.272 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:42.272 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:42.272 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:42.272 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:42.272 [95/267] Linking static target lib/librte_ring.a 00:01:42.272 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:42.272 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:42.272 [98/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.272 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:42.272 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:42.272 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:42.272 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:42.272 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:42.272 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.272 [105/267] Linking static target 
lib/librte_cmdline.a 00:01:42.272 [106/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.272 [107/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:42.272 [108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.272 [109/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.272 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:42.272 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:42.272 [112/267] Linking static target lib/librte_timer.a 00:01:42.272 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.272 [114/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.272 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.272 [116/267] Linking static target lib/librte_dmadev.a 00:01:42.272 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.272 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.272 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.272 [120/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:42.272 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:42.272 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.272 [123/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:42.272 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:42.272 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.272 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:42.272 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:42.272 [128/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:42.272 [129/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.272 [130/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.272 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:42.272 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:42.272 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.532 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.532 [135/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.532 [136/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.532 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:42.532 [138/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.532 [139/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.532 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.532 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:42.532 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.532 [143/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.532 [144/267] Linking target lib/librte_log.so.24.1 00:01:42.532 [145/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.532 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.532 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.532 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:42.532 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:42.532 [150/267] Linking static 
target lib/librte_reorder.a 00:01:42.532 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.532 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.532 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.532 [154/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.532 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.532 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.532 [157/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.533 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:42.533 [159/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.533 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.533 [161/267] Linking static target lib/librte_power.a 00:01:42.533 [162/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.533 [163/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.533 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.533 [165/267] Linking static target lib/librte_rcu.a 00:01:42.533 [166/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.533 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.533 [168/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.533 [169/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.533 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.533 [171/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:42.533 [172/267] Linking static target drivers/librte_bus_vdev.a 00:01:42.533 [173/267] Generating lib/ring.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:42.533 [174/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.533 [175/267] Linking static target lib/librte_mempool.a 00:01:42.533 [176/267] Linking static target lib/librte_compressdev.a 00:01:42.533 [177/267] Linking target lib/librte_kvargs.so.24.1 00:01:42.533 [178/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:42.533 [179/267] Linking static target lib/librte_net.a 00:01:42.533 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.533 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.533 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.533 [183/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.533 [184/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:42.792 [185/267] Linking static target lib/librte_security.a 00:01:42.792 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.792 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.792 [188/267] Linking static target lib/librte_eal.a 00:01:42.792 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.792 [190/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.792 [191/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.792 [192/267] Linking static target drivers/librte_bus_pci.a 00:01:42.792 [193/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.792 [194/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.792 [195/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.792 [196/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.792 [197/267] Linking static target lib/librte_hash.a 00:01:42.792 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.792 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.792 [200/267] Linking target lib/librte_telemetry.so.24.1 00:01:42.792 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.792 [202/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.792 [203/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.792 [204/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.053 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.053 [206/267] Linking static target lib/librte_mbuf.a 00:01:43.053 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.053 [208/267] Linking static target drivers/librte_mempool_ring.a 00:01:43.053 [209/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.053 [210/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:43.053 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.053 [212/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.313 [213/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:43.313 [214/267] Linking static target lib/librte_cryptodev.a 00:01:43.313 [215/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:43.313 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.313 [217/267] Generating lib/compressdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:43.572 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.572 [219/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.572 [220/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.572 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.832 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.832 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.093 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.093 [225/267] Linking static target lib/librte_ethdev.a 00:01:44.353 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.777 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.159 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.159 [229/267] Linking static target lib/librte_vhost.a 00:01:49.075 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.295 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.925 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.202 [233/267] Linking target lib/librte_eal.so.24.1 00:01:54.202 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:54.202 [235/267] Linking target lib/librte_pci.so.24.1 00:01:54.203 [236/267] Linking target lib/librte_ring.so.24.1 00:01:54.203 [237/267] Linking target lib/librte_meter.so.24.1 00:01:54.203 [238/267] Linking target lib/librte_timer.so.24.1 00:01:54.203 [239/267] Linking 
target drivers/librte_bus_vdev.so.24.1 00:01:54.203 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:54.463 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:54.463 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:54.463 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:54.463 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.463 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:54.463 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:54.463 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:54.463 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:54.463 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:54.725 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:54.725 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:54.725 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:54.725 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:54.725 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:54.725 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:54.725 [256/267] Linking target lib/librte_net.so.24.1 00:01:54.725 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:54.985 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:54.985 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:54.985 [260/267] Linking target lib/librte_hash.so.24.1 00:01:54.985 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:54.985 [262/267] Linking target lib/librte_security.so.24.1 00:01:54.985 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:54.985 [264/267] 
Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:55.245 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.245 [266/267] Linking target lib/librte_power.so.24.1 00:01:55.245 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:55.245 INFO: autodetecting backend as ninja 00:01:55.245 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:59.446 CC lib/ut/ut.o 00:01:59.446 CC lib/log/log.o 00:01:59.446 CC lib/log/log_flags.o 00:01:59.446 CC lib/log/log_deprecated.o 00:01:59.446 CC lib/ut_mock/mock.o 00:01:59.446 LIB libspdk_ut.a 00:01:59.446 LIB libspdk_ut_mock.a 00:01:59.446 LIB libspdk_log.a 00:01:59.446 SO libspdk_ut_mock.so.6.0 00:01:59.446 SO libspdk_ut.so.2.0 00:01:59.446 SO libspdk_log.so.7.1 00:01:59.446 SYMLINK libspdk_ut.so 00:01:59.446 SYMLINK libspdk_ut_mock.so 00:01:59.446 SYMLINK libspdk_log.so 00:01:59.707 CC lib/ioat/ioat.o 00:01:59.707 CXX lib/trace_parser/trace.o 00:01:59.707 CC lib/util/base64.o 00:01:59.967 CC lib/util/bit_array.o 00:01:59.967 CC lib/util/cpuset.o 00:01:59.967 CC lib/util/crc16.o 00:01:59.967 CC lib/util/crc32.o 00:01:59.967 CC lib/util/crc32c.o 00:01:59.967 CC lib/util/dif.o 00:01:59.967 CC lib/util/crc32_ieee.o 00:01:59.967 CC lib/dma/dma.o 00:01:59.967 CC lib/util/crc64.o 00:01:59.967 CC lib/util/fd.o 00:01:59.967 CC lib/util/fd_group.o 00:01:59.967 CC lib/util/file.o 00:01:59.967 CC lib/util/hexlify.o 00:01:59.967 CC lib/util/iov.o 00:01:59.967 CC lib/util/math.o 00:01:59.967 CC lib/util/net.o 00:01:59.967 CC lib/util/pipe.o 00:01:59.967 CC lib/util/strerror_tls.o 00:01:59.967 CC lib/util/string.o 00:01:59.967 CC lib/util/uuid.o 00:01:59.967 CC lib/util/xor.o 00:01:59.967 CC lib/util/zipf.o 00:01:59.967 CC lib/util/md5.o 00:01:59.967 CC lib/vfio_user/host/vfio_user_pci.o 00:01:59.967 CC lib/vfio_user/host/vfio_user.o 00:01:59.967 LIB libspdk_dma.a 
00:02:00.227 SO libspdk_dma.so.5.0 00:02:00.227 LIB libspdk_ioat.a 00:02:00.227 SO libspdk_ioat.so.7.0 00:02:00.227 SYMLINK libspdk_dma.so 00:02:00.227 SYMLINK libspdk_ioat.so 00:02:00.227 LIB libspdk_vfio_user.a 00:02:00.227 SO libspdk_vfio_user.so.5.0 00:02:00.487 SYMLINK libspdk_vfio_user.so 00:02:00.487 LIB libspdk_trace_parser.a 00:02:00.487 SO libspdk_trace_parser.so.6.0 00:02:00.487 LIB libspdk_util.a 00:02:00.487 SO libspdk_util.so.10.1 00:02:00.747 SYMLINK libspdk_trace_parser.so 00:02:00.747 SYMLINK libspdk_util.so 00:02:01.007 CC lib/conf/conf.o 00:02:01.007 CC lib/json/json_parse.o 00:02:01.007 CC lib/rdma_utils/rdma_utils.o 00:02:01.007 CC lib/json/json_util.o 00:02:01.007 CC lib/json/json_write.o 00:02:01.007 CC lib/vmd/vmd.o 00:02:01.007 CC lib/vmd/led.o 00:02:01.007 CC lib/idxd/idxd.o 00:02:01.007 CC lib/env_dpdk/env.o 00:02:01.007 CC lib/idxd/idxd_user.o 00:02:01.007 CC lib/env_dpdk/memory.o 00:02:01.007 CC lib/idxd/idxd_kernel.o 00:02:01.007 CC lib/env_dpdk/pci.o 00:02:01.007 CC lib/env_dpdk/init.o 00:02:01.007 CC lib/env_dpdk/threads.o 00:02:01.007 CC lib/env_dpdk/pci_ioat.o 00:02:01.007 CC lib/env_dpdk/pci_virtio.o 00:02:01.007 CC lib/env_dpdk/pci_vmd.o 00:02:01.007 CC lib/env_dpdk/pci_idxd.o 00:02:01.007 CC lib/env_dpdk/pci_event.o 00:02:01.007 CC lib/env_dpdk/sigbus_handler.o 00:02:01.007 CC lib/env_dpdk/pci_dpdk.o 00:02:01.007 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.007 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.267 LIB libspdk_conf.a 00:02:01.267 LIB libspdk_rdma_utils.a 00:02:01.267 SO libspdk_conf.so.6.0 00:02:01.529 SO libspdk_rdma_utils.so.1.0 00:02:01.529 SYMLINK libspdk_conf.so 00:02:01.529 LIB libspdk_json.a 00:02:01.529 SYMLINK libspdk_rdma_utils.so 00:02:01.529 SO libspdk_json.so.6.0 00:02:01.529 SYMLINK libspdk_json.so 00:02:01.789 CC lib/rdma_provider/common.o 00:02:01.789 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:01.789 LIB libspdk_idxd.a 00:02:01.789 LIB libspdk_vmd.a 00:02:01.789 SO libspdk_idxd.so.12.1 00:02:02.050 SO 
libspdk_vmd.so.6.0 00:02:02.050 CC lib/jsonrpc/jsonrpc_server.o 00:02:02.050 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:02.050 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:02.050 CC lib/jsonrpc/jsonrpc_client.o 00:02:02.050 SYMLINK libspdk_idxd.so 00:02:02.050 SYMLINK libspdk_vmd.so 00:02:02.050 LIB libspdk_rdma_provider.a 00:02:02.050 SO libspdk_rdma_provider.so.7.0 00:02:02.310 SYMLINK libspdk_rdma_provider.so 00:02:02.310 LIB libspdk_jsonrpc.a 00:02:02.310 SO libspdk_jsonrpc.so.6.0 00:02:02.310 SYMLINK libspdk_jsonrpc.so 00:02:02.570 LIB libspdk_env_dpdk.a 00:02:02.831 CC lib/rpc/rpc.o 00:02:02.831 SO libspdk_env_dpdk.so.15.1 00:02:02.831 SYMLINK libspdk_env_dpdk.so 00:02:03.092 LIB libspdk_rpc.a 00:02:03.092 SO libspdk_rpc.so.6.0 00:02:03.092 SYMLINK libspdk_rpc.so 00:02:03.354 CC lib/trace/trace.o 00:02:03.354 CC lib/trace/trace_flags.o 00:02:03.354 CC lib/notify/notify.o 00:02:03.354 CC lib/keyring/keyring.o 00:02:03.354 CC lib/trace/trace_rpc.o 00:02:03.354 CC lib/notify/notify_rpc.o 00:02:03.354 CC lib/keyring/keyring_rpc.o 00:02:03.616 LIB libspdk_notify.a 00:02:03.616 SO libspdk_notify.so.6.0 00:02:03.616 LIB libspdk_keyring.a 00:02:03.616 LIB libspdk_trace.a 00:02:03.617 SYMLINK libspdk_notify.so 00:02:03.617 SO libspdk_keyring.so.2.0 00:02:03.878 SO libspdk_trace.so.11.0 00:02:03.878 SYMLINK libspdk_keyring.so 00:02:03.878 SYMLINK libspdk_trace.so 00:02:04.140 CC lib/thread/thread.o 00:02:04.141 CC lib/thread/iobuf.o 00:02:04.141 CC lib/sock/sock.o 00:02:04.141 CC lib/sock/sock_rpc.o 00:02:04.713 LIB libspdk_sock.a 00:02:04.713 SO libspdk_sock.so.10.0 00:02:04.713 SYMLINK libspdk_sock.so 00:02:05.286 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.286 CC lib/nvme/nvme_fabric.o 00:02:05.286 CC lib/nvme/nvme_ctrlr.o 00:02:05.286 CC lib/nvme/nvme_ns_cmd.o 00:02:05.286 CC lib/nvme/nvme_ns.o 00:02:05.286 CC lib/nvme/nvme_pcie_common.o 00:02:05.286 CC lib/nvme/nvme_pcie.o 00:02:05.286 CC lib/nvme/nvme_qpair.o 00:02:05.286 CC lib/nvme/nvme.o 00:02:05.286 CC 
lib/nvme/nvme_quirks.o 00:02:05.286 CC lib/nvme/nvme_transport.o 00:02:05.286 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.286 CC lib/nvme/nvme_discovery.o 00:02:05.286 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.286 CC lib/nvme/nvme_tcp.o 00:02:05.286 CC lib/nvme/nvme_opal.o 00:02:05.286 CC lib/nvme/nvme_io_msg.o 00:02:05.286 CC lib/nvme/nvme_poll_group.o 00:02:05.286 CC lib/nvme/nvme_zns.o 00:02:05.286 CC lib/nvme/nvme_stubs.o 00:02:05.286 CC lib/nvme/nvme_auth.o 00:02:05.286 CC lib/nvme/nvme_cuse.o 00:02:05.286 CC lib/nvme/nvme_rdma.o 00:02:05.863 LIB libspdk_thread.a 00:02:05.863 SO libspdk_thread.so.11.0 00:02:05.863 SYMLINK libspdk_thread.so 00:02:06.435 CC lib/virtio/virtio.o 00:02:06.435 CC lib/virtio/virtio_vhost_user.o 00:02:06.435 CC lib/virtio/virtio_vfio_user.o 00:02:06.435 CC lib/virtio/virtio_pci.o 00:02:06.435 CC lib/blob/blobstore.o 00:02:06.435 CC lib/init/json_config.o 00:02:06.435 CC lib/blob/request.o 00:02:06.435 CC lib/blob/zeroes.o 00:02:06.435 CC lib/init/subsystem.o 00:02:06.435 CC lib/accel/accel.o 00:02:06.435 CC lib/blob/blob_bs_dev.o 00:02:06.435 CC lib/init/subsystem_rpc.o 00:02:06.435 CC lib/accel/accel_rpc.o 00:02:06.435 CC lib/init/rpc.o 00:02:06.435 CC lib/accel/accel_sw.o 00:02:06.435 CC lib/fsdev/fsdev.o 00:02:06.435 CC lib/fsdev/fsdev_io.o 00:02:06.435 CC lib/fsdev/fsdev_rpc.o 00:02:06.694 LIB libspdk_init.a 00:02:06.694 SO libspdk_init.so.6.0 00:02:06.694 LIB libspdk_virtio.a 00:02:06.694 SO libspdk_virtio.so.7.0 00:02:06.694 SYMLINK libspdk_init.so 00:02:06.954 SYMLINK libspdk_virtio.so 00:02:06.954 LIB libspdk_fsdev.a 00:02:07.214 SO libspdk_fsdev.so.2.0 00:02:07.214 CC lib/event/app.o 00:02:07.214 CC lib/event/reactor.o 00:02:07.214 CC lib/event/app_rpc.o 00:02:07.214 CC lib/event/log_rpc.o 00:02:07.214 CC lib/event/scheduler_static.o 00:02:07.214 SYMLINK libspdk_fsdev.so 00:02:07.476 LIB libspdk_nvme.a 00:02:07.476 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:07.476 LIB libspdk_accel.a 00:02:07.476 SO libspdk_accel.so.16.0 
00:02:07.737 SO libspdk_nvme.so.15.0 00:02:07.737 LIB libspdk_event.a 00:02:07.737 SO libspdk_event.so.14.0 00:02:07.737 SYMLINK libspdk_accel.so 00:02:07.737 SYMLINK libspdk_event.so 00:02:07.999 SYMLINK libspdk_nvme.so 00:02:07.999 CC lib/bdev/bdev.o 00:02:07.999 CC lib/bdev/bdev_rpc.o 00:02:07.999 CC lib/bdev/bdev_zone.o 00:02:07.999 CC lib/bdev/part.o 00:02:07.999 CC lib/bdev/scsi_nvme.o 00:02:08.260 LIB libspdk_fuse_dispatcher.a 00:02:08.260 SO libspdk_fuse_dispatcher.so.1.0 00:02:08.260 SYMLINK libspdk_fuse_dispatcher.so 00:02:10.175 LIB libspdk_blob.a 00:02:10.175 SO libspdk_blob.so.12.0 00:02:10.175 SYMLINK libspdk_blob.so 00:02:10.436 CC lib/blobfs/blobfs.o 00:02:10.436 CC lib/blobfs/tree.o 00:02:10.436 CC lib/lvol/lvol.o 00:02:11.381 LIB libspdk_bdev.a 00:02:11.381 SO libspdk_bdev.so.17.0 00:02:11.381 LIB libspdk_blobfs.a 00:02:11.381 SYMLINK libspdk_bdev.so 00:02:11.381 SO libspdk_blobfs.so.11.0 00:02:11.381 LIB libspdk_lvol.a 00:02:11.381 SYMLINK libspdk_blobfs.so 00:02:11.642 SO libspdk_lvol.so.11.0 00:02:11.642 SYMLINK libspdk_lvol.so 00:02:11.642 CC lib/scsi/dev.o 00:02:11.642 CC lib/scsi/lun.o 00:02:11.642 CC lib/scsi/port.o 00:02:11.642 CC lib/scsi/scsi.o 00:02:11.642 CC lib/ublk/ublk.o 00:02:11.642 CC lib/scsi/scsi_bdev.o 00:02:11.642 CC lib/nvmf/ctrlr.o 00:02:11.642 CC lib/scsi/scsi_pr.o 00:02:11.642 CC lib/ublk/ublk_rpc.o 00:02:11.642 CC lib/nvmf/ctrlr_discovery.o 00:02:11.642 CC lib/scsi/scsi_rpc.o 00:02:11.642 CC lib/nvmf/ctrlr_bdev.o 00:02:11.642 CC lib/scsi/task.o 00:02:11.642 CC lib/ftl/ftl_core.o 00:02:11.642 CC lib/nvmf/subsystem.o 00:02:11.642 CC lib/nvmf/nvmf.o 00:02:11.642 CC lib/nbd/nbd_rpc.o 00:02:11.642 CC lib/ftl/ftl_init.o 00:02:11.642 CC lib/nvmf/nvmf_rpc.o 00:02:11.642 CC lib/nbd/nbd.o 00:02:11.642 CC lib/ftl/ftl_layout.o 00:02:11.642 CC lib/nvmf/transport.o 00:02:11.642 CC lib/ftl/ftl_debug.o 00:02:11.642 CC lib/nvmf/tcp.o 00:02:11.642 CC lib/ftl/ftl_io.o 00:02:11.642 CC lib/nvmf/stubs.o 00:02:11.642 CC lib/ftl/ftl_sb.o 
00:02:11.642 CC lib/ftl/ftl_l2p.o 00:02:11.642 CC lib/nvmf/mdns_server.o 00:02:11.642 CC lib/ftl/ftl_l2p_flat.o 00:02:11.642 CC lib/nvmf/rdma.o 00:02:11.902 CC lib/ftl/ftl_band_ops.o 00:02:11.902 CC lib/nvmf/auth.o 00:02:11.902 CC lib/ftl/ftl_nv_cache.o 00:02:11.902 CC lib/ftl/ftl_band.o 00:02:11.902 CC lib/ftl/ftl_writer.o 00:02:11.902 CC lib/ftl/ftl_rq.o 00:02:11.902 CC lib/ftl/ftl_reloc.o 00:02:11.902 CC lib/ftl/ftl_l2p_cache.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.902 CC lib/ftl/ftl_p2l.o 00:02:11.902 CC lib/ftl/ftl_p2l_log.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.902 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.902 CC lib/ftl/utils/ftl_conf.o 00:02:11.902 CC lib/ftl/utils/ftl_md.o 00:02:11.902 CC lib/ftl/utils/ftl_mempool.o 00:02:11.902 CC lib/ftl/utils/ftl_bitmap.o 00:02:11.902 CC lib/ftl/utils/ftl_property.o 00:02:11.902 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:11.902 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:11.902 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:11.902 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:11.902 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:11.902 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:11.902 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:11.902 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:11.902 CC lib/ftl/base/ftl_base_dev.o 
00:02:11.902 CC lib/ftl/ftl_trace.o 00:02:11.902 CC lib/ftl/base/ftl_base_bdev.o 00:02:12.473 LIB libspdk_nbd.a 00:02:12.473 SO libspdk_nbd.so.7.0 00:02:12.473 LIB libspdk_scsi.a 00:02:12.473 SO libspdk_scsi.so.9.0 00:02:12.473 SYMLINK libspdk_nbd.so 00:02:12.473 SYMLINK libspdk_scsi.so 00:02:12.735 LIB libspdk_ublk.a 00:02:12.735 SO libspdk_ublk.so.3.0 00:02:12.735 SYMLINK libspdk_ublk.so 00:02:12.995 CC lib/vhost/vhost.o 00:02:12.995 CC lib/vhost/vhost_rpc.o 00:02:12.995 CC lib/iscsi/conn.o 00:02:12.995 CC lib/vhost/vhost_scsi.o 00:02:12.995 CC lib/iscsi/init_grp.o 00:02:12.995 CC lib/vhost/vhost_blk.o 00:02:12.995 CC lib/iscsi/iscsi.o 00:02:12.995 CC lib/vhost/rte_vhost_user.o 00:02:12.995 CC lib/iscsi/param.o 00:02:12.995 CC lib/iscsi/portal_grp.o 00:02:12.995 CC lib/iscsi/tgt_node.o 00:02:12.995 CC lib/iscsi/iscsi_subsystem.o 00:02:12.995 CC lib/iscsi/iscsi_rpc.o 00:02:12.995 CC lib/iscsi/task.o 00:02:12.995 LIB libspdk_ftl.a 00:02:13.256 SO libspdk_ftl.so.9.0 00:02:13.518 SYMLINK libspdk_ftl.so 00:02:14.090 LIB libspdk_vhost.a 00:02:14.090 SO libspdk_vhost.so.8.0 00:02:14.090 SYMLINK libspdk_vhost.so 00:02:14.352 LIB libspdk_nvmf.a 00:02:14.352 SO libspdk_nvmf.so.20.0 00:02:14.612 LIB libspdk_iscsi.a 00:02:14.612 SO libspdk_iscsi.so.8.0 00:02:14.612 SYMLINK libspdk_nvmf.so 00:02:14.612 SYMLINK libspdk_iscsi.so 00:02:15.184 CC module/env_dpdk/env_dpdk_rpc.o 00:02:15.445 CC module/fsdev/aio/fsdev_aio.o 00:02:15.445 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:15.445 CC module/fsdev/aio/linux_aio_mgr.o 00:02:15.445 CC module/sock/posix/posix.o 00:02:15.445 LIB libspdk_env_dpdk_rpc.a 00:02:15.445 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:15.445 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:15.445 CC module/keyring/linux/keyring.o 00:02:15.446 CC module/scheduler/gscheduler/gscheduler.o 00:02:15.446 CC module/keyring/linux/keyring_rpc.o 00:02:15.446 CC module/accel/ioat/accel_ioat.o 00:02:15.446 CC module/blob/bdev/blob_bdev.o 00:02:15.446 CC 
module/accel/iaa/accel_iaa.o 00:02:15.446 CC module/accel/ioat/accel_ioat_rpc.o 00:02:15.446 CC module/accel/iaa/accel_iaa_rpc.o 00:02:15.446 CC module/keyring/file/keyring.o 00:02:15.446 CC module/accel/dsa/accel_dsa.o 00:02:15.446 CC module/keyring/file/keyring_rpc.o 00:02:15.446 CC module/accel/dsa/accel_dsa_rpc.o 00:02:15.446 CC module/accel/error/accel_error.o 00:02:15.446 CC module/accel/error/accel_error_rpc.o 00:02:15.446 SO libspdk_env_dpdk_rpc.so.6.0 00:02:15.446 SYMLINK libspdk_env_dpdk_rpc.so 00:02:15.707 LIB libspdk_keyring_linux.a 00:02:15.707 LIB libspdk_scheduler_dpdk_governor.a 00:02:15.707 LIB libspdk_scheduler_gscheduler.a 00:02:15.707 LIB libspdk_keyring_file.a 00:02:15.707 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:15.707 SO libspdk_scheduler_gscheduler.so.4.0 00:02:15.707 SO libspdk_keyring_linux.so.1.0 00:02:15.707 LIB libspdk_accel_ioat.a 00:02:15.707 SO libspdk_keyring_file.so.2.0 00:02:15.707 LIB libspdk_scheduler_dynamic.a 00:02:15.707 LIB libspdk_accel_iaa.a 00:02:15.707 LIB libspdk_accel_error.a 00:02:15.707 SO libspdk_accel_ioat.so.6.0 00:02:15.707 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:15.707 SO libspdk_scheduler_dynamic.so.4.0 00:02:15.707 SYMLINK libspdk_scheduler_gscheduler.so 00:02:15.707 SYMLINK libspdk_keyring_linux.so 00:02:15.707 SO libspdk_accel_iaa.so.3.0 00:02:15.707 SYMLINK libspdk_keyring_file.so 00:02:15.707 SO libspdk_accel_error.so.2.0 00:02:15.707 LIB libspdk_accel_dsa.a 00:02:15.707 SYMLINK libspdk_accel_ioat.so 00:02:15.707 LIB libspdk_blob_bdev.a 00:02:15.967 SYMLINK libspdk_scheduler_dynamic.so 00:02:15.967 SYMLINK libspdk_accel_iaa.so 00:02:15.967 SO libspdk_accel_dsa.so.5.0 00:02:15.967 SO libspdk_blob_bdev.so.12.0 00:02:15.967 SYMLINK libspdk_accel_error.so 00:02:15.967 SYMLINK libspdk_blob_bdev.so 00:02:15.967 SYMLINK libspdk_accel_dsa.so 00:02:16.227 LIB libspdk_fsdev_aio.a 00:02:16.227 SO libspdk_fsdev_aio.so.1.0 00:02:16.227 LIB libspdk_sock_posix.a 00:02:16.487 SO libspdk_sock_posix.so.6.0 
00:02:16.487 SYMLINK libspdk_fsdev_aio.so 00:02:16.487 CC module/bdev/null/bdev_null.o 00:02:16.487 CC module/bdev/null/bdev_null_rpc.o 00:02:16.487 CC module/bdev/nvme/bdev_nvme.o 00:02:16.487 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:16.487 CC module/bdev/nvme/nvme_rpc.o 00:02:16.487 CC module/bdev/nvme/bdev_mdns_client.o 00:02:16.487 CC module/bdev/delay/vbdev_delay.o 00:02:16.487 CC module/bdev/nvme/vbdev_opal.o 00:02:16.487 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:16.487 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:16.487 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:16.487 CC module/bdev/error/vbdev_error.o 00:02:16.487 CC module/bdev/error/vbdev_error_rpc.o 00:02:16.487 CC module/bdev/passthru/vbdev_passthru.o 00:02:16.487 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:16.487 CC module/bdev/lvol/vbdev_lvol.o 00:02:16.487 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:16.487 CC module/bdev/gpt/gpt.o 00:02:16.487 SYMLINK libspdk_sock_posix.so 00:02:16.487 CC module/bdev/gpt/vbdev_gpt.o 00:02:16.487 CC module/bdev/iscsi/bdev_iscsi.o 00:02:16.487 CC module/bdev/raid/bdev_raid.o 00:02:16.487 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:16.487 CC module/bdev/malloc/bdev_malloc.o 00:02:16.487 CC module/bdev/raid/bdev_raid_rpc.o 00:02:16.487 CC module/bdev/raid/bdev_raid_sb.o 00:02:16.487 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:16.487 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:16.487 CC module/bdev/raid/raid0.o 00:02:16.487 CC module/bdev/raid/raid1.o 00:02:16.487 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:16.487 CC module/bdev/raid/concat.o 00:02:16.487 CC module/bdev/ftl/bdev_ftl.o 00:02:16.487 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:16.487 CC module/bdev/split/vbdev_split_rpc.o 00:02:16.487 CC module/bdev/aio/bdev_aio.o 00:02:16.487 CC module/bdev/aio/bdev_aio_rpc.o 00:02:16.487 CC module/bdev/split/vbdev_split.o 00:02:16.487 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:16.487 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:16.487 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:16.487 CC module/blobfs/bdev/blobfs_bdev.o 00:02:16.487 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:16.747 LIB libspdk_blobfs_bdev.a 00:02:16.747 SO libspdk_blobfs_bdev.so.6.0 00:02:16.747 LIB libspdk_bdev_error.a 00:02:16.747 LIB libspdk_bdev_split.a 00:02:16.747 LIB libspdk_bdev_null.a 00:02:16.747 LIB libspdk_bdev_gpt.a 00:02:16.747 SO libspdk_bdev_error.so.6.0 00:02:16.747 SO libspdk_bdev_split.so.6.0 00:02:16.747 SO libspdk_bdev_gpt.so.6.0 00:02:16.747 SO libspdk_bdev_null.so.6.0 00:02:16.747 LIB libspdk_bdev_ftl.a 00:02:16.747 SYMLINK libspdk_blobfs_bdev.so 00:02:16.747 LIB libspdk_bdev_passthru.a 00:02:16.747 SO libspdk_bdev_ftl.so.6.0 00:02:17.007 SO libspdk_bdev_passthru.so.6.0 00:02:17.008 LIB libspdk_bdev_zone_block.a 00:02:17.008 SYMLINK libspdk_bdev_error.so 00:02:17.008 SYMLINK libspdk_bdev_split.so 00:02:17.008 LIB libspdk_bdev_aio.a 00:02:17.008 SYMLINK libspdk_bdev_gpt.so 00:02:17.008 LIB libspdk_bdev_delay.a 00:02:17.008 SYMLINK libspdk_bdev_null.so 00:02:17.008 LIB libspdk_bdev_iscsi.a 00:02:17.008 SO libspdk_bdev_zone_block.so.6.0 00:02:17.008 LIB libspdk_bdev_malloc.a 00:02:17.008 SO libspdk_bdev_aio.so.6.0 00:02:17.008 SYMLINK libspdk_bdev_passthru.so 00:02:17.008 SO libspdk_bdev_delay.so.6.0 00:02:17.008 SYMLINK libspdk_bdev_ftl.so 00:02:17.008 SO libspdk_bdev_iscsi.so.6.0 00:02:17.008 SO libspdk_bdev_malloc.so.6.0 00:02:17.008 SYMLINK libspdk_bdev_zone_block.so 00:02:17.008 SYMLINK libspdk_bdev_aio.so 00:02:17.008 SYMLINK libspdk_bdev_delay.so 00:02:17.008 SYMLINK libspdk_bdev_iscsi.so 00:02:17.008 SYMLINK libspdk_bdev_malloc.so 00:02:17.008 LIB libspdk_bdev_lvol.a 00:02:17.008 LIB libspdk_bdev_virtio.a 00:02:17.008 SO libspdk_bdev_lvol.so.6.0 00:02:17.267 SO libspdk_bdev_virtio.so.6.0 00:02:17.267 SYMLINK libspdk_bdev_lvol.so 00:02:17.267 SYMLINK libspdk_bdev_virtio.so 00:02:17.839 LIB libspdk_bdev_raid.a 00:02:17.839 SO libspdk_bdev_raid.so.6.0 00:02:17.839 SYMLINK libspdk_bdev_raid.so 
00:02:19.751 LIB libspdk_bdev_nvme.a 00:02:19.751 SO libspdk_bdev_nvme.so.7.1 00:02:19.751 SYMLINK libspdk_bdev_nvme.so 00:02:20.323 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.323 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.323 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.323 CC module/event/subsystems/vmd/vmd.o 00:02:20.323 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.323 CC module/event/subsystems/sock/sock.o 00:02:20.323 CC module/event/subsystems/keyring/keyring.o 00:02:20.323 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.323 CC module/event/subsystems/fsdev/fsdev.o 00:02:20.585 LIB libspdk_event_iobuf.a 00:02:20.585 LIB libspdk_event_scheduler.a 00:02:20.585 LIB libspdk_event_vhost_blk.a 00:02:20.585 LIB libspdk_event_keyring.a 00:02:20.585 LIB libspdk_event_sock.a 00:02:20.585 LIB libspdk_event_vmd.a 00:02:20.585 LIB libspdk_event_fsdev.a 00:02:20.585 SO libspdk_event_iobuf.so.3.0 00:02:20.585 SO libspdk_event_vhost_blk.so.3.0 00:02:20.585 SO libspdk_event_scheduler.so.4.0 00:02:20.585 SO libspdk_event_keyring.so.1.0 00:02:20.585 SO libspdk_event_sock.so.5.0 00:02:20.585 SO libspdk_event_vmd.so.6.0 00:02:20.585 SO libspdk_event_fsdev.so.1.0 00:02:20.585 SYMLINK libspdk_event_iobuf.so 00:02:20.585 SYMLINK libspdk_event_vhost_blk.so 00:02:20.585 SYMLINK libspdk_event_keyring.so 00:02:20.585 SYMLINK libspdk_event_scheduler.so 00:02:20.585 SYMLINK libspdk_event_sock.so 00:02:20.586 SYMLINK libspdk_event_vmd.so 00:02:20.586 SYMLINK libspdk_event_fsdev.so 00:02:20.846 CC module/event/subsystems/accel/accel.o 00:02:21.107 LIB libspdk_event_accel.a 00:02:21.107 SO libspdk_event_accel.so.6.0 00:02:21.107 SYMLINK libspdk_event_accel.so 00:02:21.699 CC module/event/subsystems/bdev/bdev.o 00:02:21.699 LIB libspdk_event_bdev.a 00:02:21.699 SO libspdk_event_bdev.so.6.0 00:02:21.960 SYMLINK libspdk_event_bdev.so 00:02:22.221 CC module/event/subsystems/scsi/scsi.o 00:02:22.221 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:02:22.221 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.221 CC module/event/subsystems/ublk/ublk.o 00:02:22.221 CC module/event/subsystems/nbd/nbd.o 00:02:22.483 LIB libspdk_event_scsi.a 00:02:22.483 LIB libspdk_event_nbd.a 00:02:22.483 LIB libspdk_event_ublk.a 00:02:22.483 SO libspdk_event_scsi.so.6.0 00:02:22.483 SO libspdk_event_nbd.so.6.0 00:02:22.483 SO libspdk_event_ublk.so.3.0 00:02:22.483 LIB libspdk_event_nvmf.a 00:02:22.483 SYMLINK libspdk_event_scsi.so 00:02:22.483 SYMLINK libspdk_event_nbd.so 00:02:22.483 SYMLINK libspdk_event_ublk.so 00:02:22.483 SO libspdk_event_nvmf.so.6.0 00:02:22.483 SYMLINK libspdk_event_nvmf.so 00:02:22.743 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.743 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.005 LIB libspdk_event_vhost_scsi.a 00:02:23.005 LIB libspdk_event_iscsi.a 00:02:23.005 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.005 SO libspdk_event_iscsi.so.6.0 00:02:23.005 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.266 SYMLINK libspdk_event_iscsi.so 00:02:23.266 SO libspdk.so.6.0 00:02:23.266 SYMLINK libspdk.so 00:02:23.839 CXX app/trace/trace.o 00:02:23.839 CC app/trace_record/trace_record.o 00:02:23.839 CC app/spdk_lspci/spdk_lspci.o 00:02:23.839 CC app/spdk_nvme_identify/identify.o 00:02:23.839 TEST_HEADER include/spdk/accel.h 00:02:23.839 CC test/rpc_client/rpc_client_test.o 00:02:23.839 TEST_HEADER include/spdk/accel_module.h 00:02:23.839 TEST_HEADER include/spdk/assert.h 00:02:23.839 TEST_HEADER include/spdk/barrier.h 00:02:23.839 CC app/spdk_top/spdk_top.o 00:02:23.839 TEST_HEADER include/spdk/base64.h 00:02:23.839 TEST_HEADER include/spdk/bdev_module.h 00:02:23.839 TEST_HEADER include/spdk/bdev.h 00:02:23.839 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.839 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.839 TEST_HEADER include/spdk/bit_array.h 00:02:23.839 TEST_HEADER include/spdk/bit_pool.h 00:02:23.839 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.839 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:02:23.839 CC app/spdk_nvme_perf/perf.o 00:02:23.839 TEST_HEADER include/spdk/blobfs.h 00:02:23.839 TEST_HEADER include/spdk/conf.h 00:02:23.839 TEST_HEADER include/spdk/blob.h 00:02:23.839 TEST_HEADER include/spdk/config.h 00:02:23.839 TEST_HEADER include/spdk/cpuset.h 00:02:23.839 TEST_HEADER include/spdk/crc16.h 00:02:23.839 TEST_HEADER include/spdk/crc32.h 00:02:23.839 TEST_HEADER include/spdk/crc64.h 00:02:23.839 TEST_HEADER include/spdk/dif.h 00:02:23.839 TEST_HEADER include/spdk/dma.h 00:02:23.839 TEST_HEADER include/spdk/endian.h 00:02:23.839 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.839 TEST_HEADER include/spdk/env.h 00:02:23.839 TEST_HEADER include/spdk/event.h 00:02:23.839 TEST_HEADER include/spdk/fd.h 00:02:23.839 TEST_HEADER include/spdk/fd_group.h 00:02:23.839 TEST_HEADER include/spdk/file.h 00:02:23.839 TEST_HEADER include/spdk/fsdev_module.h 00:02:23.839 TEST_HEADER include/spdk/fsdev.h 00:02:23.839 TEST_HEADER include/spdk/ftl.h 00:02:23.839 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:23.839 TEST_HEADER include/spdk/hexlify.h 00:02:23.839 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.839 TEST_HEADER include/spdk/idxd.h 00:02:23.839 TEST_HEADER include/spdk/histogram_data.h 00:02:23.839 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.839 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.839 TEST_HEADER include/spdk/init.h 00:02:23.839 TEST_HEADER include/spdk/ioat.h 00:02:23.839 TEST_HEADER include/spdk/json.h 00:02:23.839 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.839 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.839 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.839 CC app/spdk_dd/spdk_dd.o 00:02:23.839 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.839 TEST_HEADER include/spdk/keyring_module.h 00:02:23.839 TEST_HEADER include/spdk/keyring.h 00:02:23.839 TEST_HEADER include/spdk/likely.h 00:02:23.839 TEST_HEADER include/spdk/lvol.h 00:02:23.839 TEST_HEADER include/spdk/log.h 00:02:23.839 CC app/nvmf_tgt/nvmf_main.o 
00:02:23.839 TEST_HEADER include/spdk/memory.h 00:02:23.839 TEST_HEADER include/spdk/md5.h 00:02:23.839 TEST_HEADER include/spdk/mmio.h 00:02:23.839 TEST_HEADER include/spdk/net.h 00:02:23.839 TEST_HEADER include/spdk/nbd.h 00:02:23.839 TEST_HEADER include/spdk/notify.h 00:02:23.839 TEST_HEADER include/spdk/nvme.h 00:02:23.839 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.839 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.839 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.839 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.839 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.839 TEST_HEADER include/spdk/nvmf.h 00:02:23.839 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.839 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.839 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.839 TEST_HEADER include/spdk/opal.h 00:02:23.839 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.839 TEST_HEADER include/spdk/pci_ids.h 00:02:23.839 TEST_HEADER include/spdk/opal_spec.h 00:02:23.839 TEST_HEADER include/spdk/pipe.h 00:02:23.839 TEST_HEADER include/spdk/queue.h 00:02:23.840 TEST_HEADER include/spdk/reduce.h 00:02:23.840 TEST_HEADER include/spdk/scheduler.h 00:02:23.840 TEST_HEADER include/spdk/rpc.h 00:02:23.840 TEST_HEADER include/spdk/scsi.h 00:02:23.840 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.840 TEST_HEADER include/spdk/sock.h 00:02:23.840 TEST_HEADER include/spdk/string.h 00:02:23.840 TEST_HEADER include/spdk/stdinc.h 00:02:23.840 TEST_HEADER include/spdk/thread.h 00:02:23.840 TEST_HEADER include/spdk/trace.h 00:02:23.840 TEST_HEADER include/spdk/ublk.h 00:02:23.840 TEST_HEADER include/spdk/trace_parser.h 00:02:23.840 TEST_HEADER include/spdk/util.h 00:02:23.840 TEST_HEADER include/spdk/tree.h 00:02:23.840 TEST_HEADER include/spdk/uuid.h 00:02:23.840 TEST_HEADER include/spdk/version.h 00:02:23.840 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.840 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.840 TEST_HEADER include/spdk/vmd.h 00:02:23.840 TEST_HEADER 
include/spdk/vhost.h 00:02:23.840 TEST_HEADER include/spdk/xor.h 00:02:23.840 CXX test/cpp_headers/accel.o 00:02:23.840 TEST_HEADER include/spdk/zipf.h 00:02:23.840 CXX test/cpp_headers/assert.o 00:02:23.840 CXX test/cpp_headers/accel_module.o 00:02:23.840 CXX test/cpp_headers/barrier.o 00:02:23.840 CXX test/cpp_headers/bdev_module.o 00:02:23.840 CXX test/cpp_headers/base64.o 00:02:23.840 CXX test/cpp_headers/bdev.o 00:02:23.840 CXX test/cpp_headers/bit_pool.o 00:02:23.840 CXX test/cpp_headers/bdev_zone.o 00:02:23.840 CXX test/cpp_headers/blob_bdev.o 00:02:23.840 CXX test/cpp_headers/bit_array.o 00:02:23.840 CC app/spdk_tgt/spdk_tgt.o 00:02:23.840 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.840 CXX test/cpp_headers/blob.o 00:02:23.840 CXX test/cpp_headers/blobfs.o 00:02:23.840 CXX test/cpp_headers/conf.o 00:02:23.840 CXX test/cpp_headers/config.o 00:02:23.840 CXX test/cpp_headers/crc16.o 00:02:23.840 CXX test/cpp_headers/cpuset.o 00:02:23.840 CXX test/cpp_headers/crc32.o 00:02:23.840 CXX test/cpp_headers/dif.o 00:02:23.840 CXX test/cpp_headers/crc64.o 00:02:23.840 CXX test/cpp_headers/dma.o 00:02:23.840 CXX test/cpp_headers/endian.o 00:02:23.840 CXX test/cpp_headers/env.o 00:02:23.840 CXX test/cpp_headers/env_dpdk.o 00:02:23.840 CXX test/cpp_headers/fd.o 00:02:23.840 CXX test/cpp_headers/event.o 00:02:23.840 CXX test/cpp_headers/fd_group.o 00:02:23.840 CXX test/cpp_headers/file.o 00:02:23.840 CXX test/cpp_headers/fsdev.o 00:02:23.840 CXX test/cpp_headers/fsdev_module.o 00:02:23.840 CXX test/cpp_headers/ftl.o 00:02:23.840 CXX test/cpp_headers/fuse_dispatcher.o 00:02:23.840 CXX test/cpp_headers/hexlify.o 00:02:23.840 CXX test/cpp_headers/gpt_spec.o 00:02:23.840 CXX test/cpp_headers/idxd.o 00:02:23.840 CXX test/cpp_headers/histogram_data.o 00:02:23.840 CXX test/cpp_headers/ioat.o 00:02:23.840 CXX test/cpp_headers/idxd_spec.o 00:02:23.840 CXX test/cpp_headers/init.o 00:02:23.840 CXX test/cpp_headers/ioat_spec.o 00:02:23.840 CXX test/cpp_headers/iscsi_spec.o 
00:02:23.840 CXX test/cpp_headers/json.o 00:02:23.840 CXX test/cpp_headers/keyring.o 00:02:23.840 CXX test/cpp_headers/keyring_module.o 00:02:23.840 CXX test/cpp_headers/jsonrpc.o 00:02:23.840 CXX test/cpp_headers/log.o 00:02:23.840 CXX test/cpp_headers/lvol.o 00:02:23.840 CXX test/cpp_headers/likely.o 00:02:23.840 CXX test/cpp_headers/memory.o 00:02:23.840 CXX test/cpp_headers/md5.o 00:02:23.840 CXX test/cpp_headers/mmio.o 00:02:23.840 CXX test/cpp_headers/nvme.o 00:02:23.840 CXX test/cpp_headers/notify.o 00:02:23.840 CXX test/cpp_headers/net.o 00:02:23.840 CXX test/cpp_headers/nbd.o 00:02:23.840 CXX test/cpp_headers/nvme_intel.o 00:02:23.840 CXX test/cpp_headers/nvme_spec.o 00:02:23.840 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.840 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.840 CXX test/cpp_headers/nvme_zns.o 00:02:23.840 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.840 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.840 CXX test/cpp_headers/nvmf.o 00:02:23.840 CXX test/cpp_headers/nvmf_transport.o 00:02:23.840 CXX test/cpp_headers/nvmf_spec.o 00:02:23.840 CXX test/cpp_headers/pci_ids.o 00:02:23.840 CXX test/cpp_headers/opal.o 00:02:23.840 CXX test/cpp_headers/opal_spec.o 00:02:23.840 CXX test/cpp_headers/pipe.o 00:02:23.840 CXX test/cpp_headers/reduce.o 00:02:23.840 CXX test/cpp_headers/rpc.o 00:02:23.840 CXX test/cpp_headers/queue.o 00:02:23.840 CXX test/cpp_headers/scheduler.o 00:02:23.840 CXX test/cpp_headers/sock.o 00:02:23.840 CXX test/cpp_headers/stdinc.o 00:02:23.840 CXX test/cpp_headers/scsi_spec.o 00:02:23.840 CXX test/cpp_headers/scsi.o 00:02:23.840 CXX test/cpp_headers/string.o 00:02:23.840 CC test/env/memory/memory_ut.o 00:02:23.840 CXX test/cpp_headers/thread.o 00:02:24.105 CXX test/cpp_headers/trace.o 00:02:24.105 CXX test/cpp_headers/ublk.o 00:02:24.105 CXX test/cpp_headers/tree.o 00:02:24.105 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.105 CXX test/cpp_headers/trace_parser.o 00:02:24.105 CXX test/cpp_headers/version.o 00:02:24.105 
CXX test/cpp_headers/util.o 00:02:24.105 CXX test/cpp_headers/uuid.o 00:02:24.105 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.105 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.105 CXX test/cpp_headers/vhost.o 00:02:24.105 CC examples/util/zipf/zipf.o 00:02:24.105 CXX test/cpp_headers/vmd.o 00:02:24.105 CC examples/ioat/verify/verify.o 00:02:24.105 CXX test/cpp_headers/zipf.o 00:02:24.105 CC test/env/pci/pci_ut.o 00:02:24.105 CC test/env/vtophys/vtophys.o 00:02:24.105 CXX test/cpp_headers/xor.o 00:02:24.105 CC examples/ioat/perf/perf.o 00:02:24.105 LINK spdk_lspci 00:02:24.105 CC test/thread/poller_perf/poller_perf.o 00:02:24.105 CC test/app/jsoncat/jsoncat.o 00:02:24.105 CC app/fio/nvme/fio_plugin.o 00:02:24.105 CC test/app/histogram_perf/histogram_perf.o 00:02:24.105 CC test/dma/test_dma/test_dma.o 00:02:24.105 CC test/app/stub/stub.o 00:02:24.105 LINK rpc_client_test 00:02:24.105 CC test/app/bdev_svc/bdev_svc.o 00:02:24.105 CC app/fio/bdev/fio_plugin.o 00:02:24.105 LINK nvmf_tgt 00:02:24.105 LINK iscsi_tgt 00:02:24.365 LINK spdk_nvme_discover 00:02:24.365 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.365 LINK interrupt_tgt 00:02:24.365 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.365 LINK spdk_trace_record 00:02:24.365 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.365 LINK spdk_trace 00:02:24.624 LINK spdk_dd 00:02:24.624 LINK vtophys 00:02:24.624 LINK zipf 00:02:24.624 LINK env_dpdk_post_init 00:02:24.624 LINK poller_perf 00:02:24.624 LINK jsoncat 00:02:24.624 LINK spdk_tgt 00:02:24.624 LINK histogram_perf 00:02:24.624 LINK bdev_svc 00:02:24.625 LINK stub 00:02:24.625 LINK ioat_perf 00:02:24.625 LINK verify 00:02:24.885 CC app/vhost/vhost.o 00:02:24.885 LINK pci_ut 00:02:24.885 LINK nvme_fuzz 00:02:25.145 LINK test_dma 00:02:25.145 LINK spdk_bdev 00:02:25.145 CC examples/vmd/led/led.o 00:02:25.145 LINK mem_callbacks 00:02:25.145 CC 
examples/vmd/lsvmd/lsvmd.o 00:02:25.145 CC test/event/reactor/reactor.o 00:02:25.145 CC examples/sock/hello_world/hello_sock.o 00:02:25.145 LINK vhost 00:02:25.145 CC test/event/event_perf/event_perf.o 00:02:25.145 LINK vhost_fuzz 00:02:25.145 CC test/event/reactor_perf/reactor_perf.o 00:02:25.145 CC examples/idxd/perf/perf.o 00:02:25.145 CC test/event/app_repeat/app_repeat.o 00:02:25.145 CC examples/thread/thread/thread_ex.o 00:02:25.145 CC test/event/scheduler/scheduler.o 00:02:25.145 LINK spdk_nvme 00:02:25.145 LINK spdk_nvme_perf 00:02:25.145 LINK event_perf 00:02:25.145 LINK reactor_perf 00:02:25.145 LINK lsvmd 00:02:25.145 LINK led 00:02:25.405 LINK reactor 00:02:25.405 LINK spdk_nvme_identify 00:02:25.405 LINK app_repeat 00:02:25.405 LINK spdk_top 00:02:25.405 LINK hello_sock 00:02:25.405 LINK scheduler 00:02:25.405 LINK thread 00:02:25.405 LINK idxd_perf 00:02:25.665 CC test/nvme/startup/startup.o 00:02:25.665 CC test/nvme/sgl/sgl.o 00:02:25.665 CC test/nvme/reserve/reserve.o 00:02:25.665 CC test/nvme/connect_stress/connect_stress.o 00:02:25.665 CC test/nvme/fdp/fdp.o 00:02:25.665 CC test/nvme/reset/reset.o 00:02:25.665 CC test/nvme/boot_partition/boot_partition.o 00:02:25.665 CC test/nvme/e2edp/nvme_dp.o 00:02:25.665 CC test/nvme/simple_copy/simple_copy.o 00:02:25.665 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.665 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.665 CC test/nvme/err_injection/err_injection.o 00:02:25.665 CC test/nvme/cuse/cuse.o 00:02:25.665 CC test/nvme/aer/aer.o 00:02:25.665 CC test/nvme/overhead/overhead.o 00:02:25.665 CC test/nvme/compliance/nvme_compliance.o 00:02:25.666 CC test/accel/dif/dif.o 00:02:25.666 CC test/blobfs/mkfs/mkfs.o 00:02:25.666 LINK memory_ut 00:02:25.666 CC test/lvol/esnap/esnap.o 00:02:25.666 LINK startup 00:02:25.666 LINK boot_partition 00:02:25.929 LINK connect_stress 00:02:25.929 LINK err_injection 00:02:25.929 LINK reserve 00:02:25.929 LINK doorbell_aers 00:02:25.929 LINK fused_ordering 
00:02:25.929 LINK simple_copy 00:02:25.929 LINK reset 00:02:25.929 LINK mkfs 00:02:25.929 LINK sgl 00:02:25.929 LINK nvme_dp 00:02:25.929 CC examples/nvme/hello_world/hello_world.o 00:02:25.929 CC examples/nvme/hotplug/hotplug.o 00:02:25.929 CC examples/nvme/reconnect/reconnect.o 00:02:25.929 CC examples/nvme/abort/abort.o 00:02:25.929 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.929 CC examples/nvme/arbitration/arbitration.o 00:02:25.929 LINK aer 00:02:25.929 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.929 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.929 LINK overhead 00:02:25.929 LINK fdp 00:02:25.929 LINK nvme_compliance 00:02:25.929 CC examples/accel/perf/accel_perf.o 00:02:25.929 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:25.929 CC examples/blob/cli/blobcli.o 00:02:25.929 CC examples/blob/hello_world/hello_blob.o 00:02:26.189 LINK pmr_persistence 00:02:26.189 LINK cmb_copy 00:02:26.189 LINK hello_world 00:02:26.189 LINK hotplug 00:02:26.189 LINK arbitration 00:02:26.189 LINK reconnect 00:02:26.189 LINK hello_blob 00:02:26.450 LINK abort 00:02:26.450 LINK hello_fsdev 00:02:26.450 LINK dif 00:02:26.450 LINK nvme_manage 00:02:26.450 LINK iscsi_fuzz 00:02:26.450 LINK accel_perf 00:02:26.450 LINK blobcli 00:02:27.021 LINK cuse 00:02:27.021 CC test/bdev/bdevio/bdevio.o 00:02:27.021 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.282 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.544 LINK hello_bdev 00:02:27.544 LINK bdevio 00:02:28.116 LINK bdevperf 00:02:28.689 CC examples/nvmf/nvmf/nvmf.o 00:02:28.950 LINK nvmf 00:02:31.493 LINK esnap 00:02:31.753 00:02:31.753 real 0m59.948s 00:02:31.753 user 8m15.299s 00:02:31.753 sys 4m18.437s 00:02:31.753 11:14:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:31.753 11:14:31 make -- common/autotest_common.sh@10 -- $ set +x 00:02:31.753 ************************************ 00:02:31.753 END TEST make 00:02:31.753 ************************************ 00:02:31.753 11:14:31 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.753 11:14:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.753 11:14:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.753 11:14:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.753 11:14:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.753 11:14:31 -- pm/common@44 -- $ pid=2155971 00:02:31.753 11:14:31 -- pm/common@50 -- $ kill -TERM 2155971 00:02:31.753 11:14:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.753 11:14:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.753 11:14:31 -- pm/common@44 -- $ pid=2155972 00:02:31.753 11:14:31 -- pm/common@50 -- $ kill -TERM 2155972 00:02:31.753 11:14:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.753 11:14:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.753 11:14:31 -- pm/common@44 -- $ pid=2155974 00:02:31.753 11:14:31 -- pm/common@50 -- $ kill -TERM 2155974 00:02:31.753 11:14:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.753 11:14:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.753 11:14:31 -- pm/common@44 -- $ pid=2155998 00:02:31.753 11:14:31 -- pm/common@50 -- $ sudo -E kill -TERM 2155998 00:02:31.753 11:14:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:31.753 11:14:31 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:32.015 11:14:31 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:32.015 11:14:31 -- common/autotest_common.sh@1711 -- # lcov 
--version 00:02:32.015 11:14:31 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:32.015 11:14:31 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:32.015 11:14:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:32.015 11:14:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:32.015 11:14:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:32.015 11:14:31 -- scripts/common.sh@336 -- # IFS=.-: 00:02:32.015 11:14:31 -- scripts/common.sh@336 -- # read -ra ver1 00:02:32.015 11:14:31 -- scripts/common.sh@337 -- # IFS=.-: 00:02:32.015 11:14:31 -- scripts/common.sh@337 -- # read -ra ver2 00:02:32.015 11:14:31 -- scripts/common.sh@338 -- # local 'op=<' 00:02:32.015 11:14:31 -- scripts/common.sh@340 -- # ver1_l=2 00:02:32.015 11:14:31 -- scripts/common.sh@341 -- # ver2_l=1 00:02:32.015 11:14:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:32.015 11:14:31 -- scripts/common.sh@344 -- # case "$op" in 00:02:32.015 11:14:31 -- scripts/common.sh@345 -- # : 1 00:02:32.015 11:14:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:32.015 11:14:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.015 11:14:31 -- scripts/common.sh@365 -- # decimal 1 00:02:32.015 11:14:31 -- scripts/common.sh@353 -- # local d=1 00:02:32.015 11:14:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:32.015 11:14:31 -- scripts/common.sh@355 -- # echo 1 00:02:32.015 11:14:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:32.015 11:14:31 -- scripts/common.sh@366 -- # decimal 2 00:02:32.015 11:14:31 -- scripts/common.sh@353 -- # local d=2 00:02:32.015 11:14:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:32.015 11:14:31 -- scripts/common.sh@355 -- # echo 2 00:02:32.015 11:14:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:32.015 11:14:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:32.015 11:14:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:32.015 11:14:31 -- scripts/common.sh@368 -- # return 0 00:02:32.015 11:14:31 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:32.015 11:14:31 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:32.015 --rc genhtml_branch_coverage=1 00:02:32.015 --rc genhtml_function_coverage=1 00:02:32.015 --rc genhtml_legend=1 00:02:32.015 --rc geninfo_all_blocks=1 00:02:32.015 --rc geninfo_unexecuted_blocks=1 00:02:32.015 00:02:32.015 ' 00:02:32.015 11:14:31 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:32.015 --rc genhtml_branch_coverage=1 00:02:32.015 --rc genhtml_function_coverage=1 00:02:32.015 --rc genhtml_legend=1 00:02:32.015 --rc geninfo_all_blocks=1 00:02:32.015 --rc geninfo_unexecuted_blocks=1 00:02:32.015 00:02:32.015 ' 00:02:32.015 11:14:31 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:32.015 --rc genhtml_branch_coverage=1 00:02:32.015 --rc 
genhtml_function_coverage=1 00:02:32.015 --rc genhtml_legend=1 00:02:32.015 --rc geninfo_all_blocks=1 00:02:32.015 --rc geninfo_unexecuted_blocks=1 00:02:32.015 00:02:32.015 ' 00:02:32.015 11:14:31 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:32.015 --rc genhtml_branch_coverage=1 00:02:32.015 --rc genhtml_function_coverage=1 00:02:32.015 --rc genhtml_legend=1 00:02:32.015 --rc geninfo_all_blocks=1 00:02:32.015 --rc geninfo_unexecuted_blocks=1 00:02:32.015 00:02:32.015 ' 00:02:32.015 11:14:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.015 11:14:31 -- nvmf/common.sh@7 -- # uname -s 00:02:32.015 11:14:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.015 11:14:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.015 11:14:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.015 11:14:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.015 11:14:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.015 11:14:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.015 11:14:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.015 11:14:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.015 11:14:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.015 11:14:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.015 11:14:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:32.015 11:14:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:32.015 11:14:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.015 11:14:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.015 11:14:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.015 11:14:31 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.015 11:14:31 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.015 11:14:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:32.015 11:14:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.015 11:14:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.015 11:14:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.015 11:14:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.015 11:14:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.015 11:14:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.015 11:14:31 -- paths/export.sh@5 -- # export PATH 00:02:32.015 11:14:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.015 11:14:31 -- nvmf/common.sh@51 -- # : 0 00:02:32.015 11:14:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:32.015 11:14:31 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:32.015 11:14:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.015 11:14:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.015 11:14:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.015 11:14:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:32.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:32.015 11:14:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:32.015 11:14:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:32.015 11:14:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:32.015 11:14:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.015 11:14:31 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.015 11:14:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.015 11:14:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.015 11:14:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.015 11:14:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.015 11:14:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.015 11:14:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.015 11:14:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.015 11:14:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.015 11:14:31 -- spdk/autotest.sh@48 -- # udevadm_pid=2221786 00:02:32.015 11:14:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.015 11:14:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.015 11:14:31 -- pm/common@17 -- # local monitor 00:02:32.015 11:14:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.015 11:14:31 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:32.015 11:14:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.015 11:14:31 -- pm/common@21 -- # date +%s 00:02:32.015 11:14:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.015 11:14:31 -- pm/common@21 -- # date +%s 00:02:32.015 11:14:31 -- pm/common@25 -- # sleep 1 00:02:32.015 11:14:31 -- pm/common@21 -- # date +%s 00:02:32.015 11:14:31 -- pm/common@21 -- # date +%s 00:02:32.015 11:14:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733566471 00:02:32.015 11:14:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733566471 00:02:32.015 11:14:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733566471 00:02:32.015 11:14:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733566471 00:02:32.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733566471_collect-vmstat.pm.log 00:02:32.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733566471_collect-cpu-load.pm.log 00:02:32.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733566471_collect-cpu-temp.pm.log 00:02:32.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733566471_collect-bmc-pm.bmc.pm.log 00:02:33.217 
11:14:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.217 11:14:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.217 11:14:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:33.217 11:14:32 -- common/autotest_common.sh@10 -- # set +x 00:02:33.217 11:14:32 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.217 11:14:32 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:33.217 11:14:32 -- common/autotest_common.sh@10 -- # set +x 00:02:33.217 11:14:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.217 11:14:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.217 11:14:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.217 11:14:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.217 11:14:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.217 11:14:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.217 11:14:32 -- common/autotest_common.sh@1457 -- # uname 00:02:33.217 11:14:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:33.217 11:14:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.217 11:14:32 -- common/autotest_common.sh@1477 -- # uname 00:02:33.217 11:14:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:33.217 11:14:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:33.217 11:14:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:33.217 lcov: LCOV version 1.15 00:02:33.217 11:14:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:55.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:03.555 11:15:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:03.555 11:15:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:03.555 11:15:02 -- common/autotest_common.sh@10 -- # set +x 00:03:03.555 11:15:02 -- spdk/autotest.sh@78 -- # rm -f 00:03:03.555 11:15:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.859 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:06.859 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.859 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.859 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.121 11:15:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:07.121 11:15:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:07.121 11:15:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:07.121 11:15:06 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:07.121 11:15:06 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:07.121 11:15:06 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:07.121 11:15:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:07.121 11:15:06 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:07.121 11:15:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:07.121 11:15:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:07.121 11:15:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:07.121 11:15:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.121 11:15:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:07.121 11:15:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:07.121 11:15:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.121 11:15:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:07.121 11:15:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:07.121 11:15:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:07.121 11:15:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.121 No valid GPT data, bailing 00:03:07.121 11:15:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.121 11:15:06 -- scripts/common.sh@394 -- # pt= 00:03:07.121 11:15:06 -- scripts/common.sh@395 -- 
# return 1 00:03:07.121 11:15:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.121 1+0 records in 00:03:07.121 1+0 records out 00:03:07.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395057 s, 265 MB/s 00:03:07.121 11:15:06 -- spdk/autotest.sh@105 -- # sync 00:03:07.121 11:15:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.121 11:15:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.121 11:15:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.122 11:15:14 -- spdk/autotest.sh@111 -- # uname -s 00:03:17.122 11:15:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:17.122 11:15:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:17.122 11:15:14 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.039 Hugepages 00:03:19.039 node hugesize free / total 00:03:19.039 node0 1048576kB 0 / 0 00:03:19.039 node0 2048kB 0 / 0 00:03:19.039 node1 1048576kB 0 / 0 00:03:19.039 node1 2048kB 0 / 0 00:03:19.039 00:03:19.039 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.039 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:19.039 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:19.039 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:19.039 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.5 8086 
0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:19.039 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:19.039 11:15:18 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.039 11:15:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.039 11:15:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:19.039 11:15:18 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.344 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.344 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.258 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:24.519 11:15:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:25.459 11:15:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:25.459 11:15:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:25.459 11:15:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:25.459 11:15:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:25.459 11:15:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:25.459 11:15:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:25.459 11:15:24 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.459 11:15:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.459 11:15:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:25.459 11:15:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:25.459 11:15:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:25.459 11:15:24 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.661 Waiting for block devices as requested 00:03:29.661 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:29.661 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:29.661 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:29.922 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:29.922 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.922 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:30.184 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:30.184 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:30.184 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:30.184 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:30.446 11:15:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:30.446 11:15:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:30.446 11:15:29 -- 
common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:30.446 11:15:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:30.446 11:15:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:30.446 11:15:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:30.446 11:15:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:30.446 11:15:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:30.446 11:15:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:30.708 11:15:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:30.708 11:15:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:30.708 11:15:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:30.708 11:15:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:30.708 11:15:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:30.708 11:15:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:30.708 11:15:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:30.708 11:15:29 -- common/autotest_common.sh@1543 -- # continue 00:03:30.708 11:15:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:30.708 11:15:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:30.708 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.708 11:15:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:30.708 11:15:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.708 
11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:03:30.708 11:15:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.013 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:34.013 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:34.274 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:34.535 11:15:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.535 11:15:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.535 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:03:34.535 11:15:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.535 11:15:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:34.535 11:15:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.535 11:15:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:34.535 11:15:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:34.535 11:15:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:34.535 11:15:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.535 11:15:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:34.536 11:15:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.536 11:15:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.536 11:15:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.536 11:15:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.536 11:15:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.797 11:15:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:34.797 11:15:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:34.797 11:15:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:34.797 11:15:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:34.797 11:15:33 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:34.797 11:15:33 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:34.797 11:15:33 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:34.797 11:15:33 -- common/autotest_common.sh@1572 -- # return 0 00:03:34.797 11:15:33 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:34.797 11:15:33 -- common/autotest_common.sh@1580 -- # return 0 00:03:34.797 11:15:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:34.797 11:15:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:34.797 11:15:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.797 11:15:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.797 11:15:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:34.797 11:15:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.797 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:03:34.797 11:15:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:34.797 11:15:33 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.797 11:15:33 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.797 11:15:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.797 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:03:34.797 ************************************ 00:03:34.797 START TEST env 00:03:34.797 ************************************ 00:03:34.797 11:15:34 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.797 * Looking for test storage... 00:03:34.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:34.797 11:15:34 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:34.797 11:15:34 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:34.797 11:15:34 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:35.060 11:15:34 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.060 11:15:34 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.060 11:15:34 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.060 11:15:34 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.060 11:15:34 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.060 11:15:34 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.060 11:15:34 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.060 11:15:34 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.060 11:15:34 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.060 11:15:34 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.060 11:15:34 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.060 11:15:34 env -- scripts/common.sh@344 -- # case "$op" in 00:03:35.060 11:15:34 env -- scripts/common.sh@345 -- # : 1 00:03:35.060 11:15:34 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.060 11:15:34 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.060 11:15:34 env -- scripts/common.sh@365 -- # decimal 1 00:03:35.060 11:15:34 env -- scripts/common.sh@353 -- # local d=1 00:03:35.060 11:15:34 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.060 11:15:34 env -- scripts/common.sh@355 -- # echo 1 00:03:35.060 11:15:34 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.060 11:15:34 env -- scripts/common.sh@366 -- # decimal 2 00:03:35.060 11:15:34 env -- scripts/common.sh@353 -- # local d=2 00:03:35.060 11:15:34 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.060 11:15:34 env -- scripts/common.sh@355 -- # echo 2 00:03:35.060 11:15:34 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.060 11:15:34 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.060 11:15:34 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.060 11:15:34 env -- scripts/common.sh@368 -- # return 0 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:35.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.060 --rc genhtml_branch_coverage=1 00:03:35.060 --rc genhtml_function_coverage=1 00:03:35.060 --rc genhtml_legend=1 00:03:35.060 --rc geninfo_all_blocks=1 00:03:35.060 --rc geninfo_unexecuted_blocks=1 00:03:35.060 00:03:35.060 ' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:35.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.060 --rc genhtml_branch_coverage=1 00:03:35.060 --rc genhtml_function_coverage=1 00:03:35.060 --rc genhtml_legend=1 00:03:35.060 --rc geninfo_all_blocks=1 00:03:35.060 --rc geninfo_unexecuted_blocks=1 00:03:35.060 00:03:35.060 ' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:35.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:35.060 --rc genhtml_branch_coverage=1 00:03:35.060 --rc genhtml_function_coverage=1 00:03:35.060 --rc genhtml_legend=1 00:03:35.060 --rc geninfo_all_blocks=1 00:03:35.060 --rc geninfo_unexecuted_blocks=1 00:03:35.060 00:03:35.060 ' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:35.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.060 --rc genhtml_branch_coverage=1 00:03:35.060 --rc genhtml_function_coverage=1 00:03:35.060 --rc genhtml_legend=1 00:03:35.060 --rc geninfo_all_blocks=1 00:03:35.060 --rc geninfo_unexecuted_blocks=1 00:03:35.060 00:03:35.060 ' 00:03:35.060 11:15:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.060 11:15:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.060 11:15:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.060 ************************************ 00:03:35.060 START TEST env_memory 00:03:35.060 ************************************ 00:03:35.060 11:15:34 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.060 00:03:35.060 00:03:35.060 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.060 http://cunit.sourceforge.net/ 00:03:35.060 00:03:35.060 00:03:35.060 Suite: memory 00:03:35.060 Test: alloc and free memory map ...[2024-12-07 11:15:34.317702] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:35.060 passed 00:03:35.060 Test: mem map translation ...[2024-12-07 11:15:34.346580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:35.060 [2024-12-07 
11:15:34.346614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:35.060 [2024-12-07 11:15:34.346663] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:35.060 [2024-12-07 11:15:34.346676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:35.060 passed 00:03:35.060 Test: mem map registration ...[2024-12-07 11:15:34.397370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:35.060 [2024-12-07 11:15:34.397398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:35.323 passed 00:03:35.323 Test: mem map adjacent registrations ...passed 00:03:35.323 00:03:35.323 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.323 suites 1 1 n/a 0 0 00:03:35.323 tests 4 4 4 0 0 00:03:35.323 asserts 152 152 152 0 n/a 00:03:35.323 00:03:35.323 Elapsed time = 0.173 seconds 00:03:35.323 00:03:35.323 real 0m0.230s 00:03:35.323 user 0m0.181s 00:03:35.323 sys 0m0.045s 00:03:35.323 11:15:34 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.323 11:15:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:35.323 ************************************ 00:03:35.323 END TEST env_memory 00:03:35.323 ************************************ 00:03:35.323 11:15:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:35.323 11:15:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:35.323 11:15:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.323 11:15:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.323 ************************************ 00:03:35.323 START TEST env_vtophys 00:03:35.323 ************************************ 00:03:35.323 11:15:34 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:35.323 EAL: lib.eal log level changed from notice to debug 00:03:35.323 EAL: Detected lcore 0 as core 0 on socket 0 00:03:35.323 EAL: Detected lcore 1 as core 1 on socket 0 00:03:35.323 EAL: Detected lcore 2 as core 2 on socket 0 00:03:35.323 EAL: Detected lcore 3 as core 3 on socket 0 00:03:35.323 EAL: Detected lcore 4 as core 4 on socket 0 00:03:35.323 EAL: Detected lcore 5 as core 5 on socket 0 00:03:35.323 EAL: Detected lcore 6 as core 6 on socket 0 00:03:35.323 EAL: Detected lcore 7 as core 7 on socket 0 00:03:35.323 EAL: Detected lcore 8 as core 8 on socket 0 00:03:35.323 EAL: Detected lcore 9 as core 9 on socket 0 00:03:35.323 EAL: Detected lcore 10 as core 10 on socket 0 00:03:35.323 EAL: Detected lcore 11 as core 11 on socket 0 00:03:35.323 EAL: Detected lcore 12 as core 12 on socket 0 00:03:35.323 EAL: Detected lcore 13 as core 13 on socket 0 00:03:35.323 EAL: Detected lcore 14 as core 14 on socket 0 00:03:35.323 EAL: Detected lcore 15 as core 15 on socket 0 00:03:35.323 EAL: Detected lcore 16 as core 16 on socket 0 00:03:35.323 EAL: Detected lcore 17 as core 17 on socket 0 00:03:35.323 EAL: Detected lcore 18 as core 18 on socket 0 00:03:35.323 EAL: Detected lcore 19 as core 19 on socket 0 00:03:35.323 EAL: Detected lcore 20 as core 20 on socket 0 00:03:35.323 EAL: Detected lcore 21 as core 21 on socket 0 00:03:35.323 EAL: Detected lcore 22 as core 22 on socket 0 00:03:35.323 EAL: Detected lcore 23 as core 23 on socket 0 00:03:35.323 EAL: Detected lcore 24 as core 24 on socket 0 00:03:35.323 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:35.323 EAL: Detected lcore 26 as core 26 on socket 0 00:03:35.323 EAL: Detected lcore 27 as core 27 on socket 0 00:03:35.323 EAL: Detected lcore 28 as core 28 on socket 0 00:03:35.323 EAL: Detected lcore 29 as core 29 on socket 0 00:03:35.323 EAL: Detected lcore 30 as core 30 on socket 0 00:03:35.323 EAL: Detected lcore 31 as core 31 on socket 0 00:03:35.323 EAL: Detected lcore 32 as core 32 on socket 0 00:03:35.323 EAL: Detected lcore 33 as core 33 on socket 0 00:03:35.323 EAL: Detected lcore 34 as core 34 on socket 0 00:03:35.323 EAL: Detected lcore 35 as core 35 on socket 0 00:03:35.323 EAL: Detected lcore 36 as core 0 on socket 1 00:03:35.323 EAL: Detected lcore 37 as core 1 on socket 1 00:03:35.323 EAL: Detected lcore 38 as core 2 on socket 1 00:03:35.323 EAL: Detected lcore 39 as core 3 on socket 1 00:03:35.323 EAL: Detected lcore 40 as core 4 on socket 1 00:03:35.323 EAL: Detected lcore 41 as core 5 on socket 1 00:03:35.323 EAL: Detected lcore 42 as core 6 on socket 1 00:03:35.323 EAL: Detected lcore 43 as core 7 on socket 1 00:03:35.323 EAL: Detected lcore 44 as core 8 on socket 1 00:03:35.323 EAL: Detected lcore 45 as core 9 on socket 1 00:03:35.323 EAL: Detected lcore 46 as core 10 on socket 1 00:03:35.323 EAL: Detected lcore 47 as core 11 on socket 1 00:03:35.323 EAL: Detected lcore 48 as core 12 on socket 1 00:03:35.323 EAL: Detected lcore 49 as core 13 on socket 1 00:03:35.323 EAL: Detected lcore 50 as core 14 on socket 1 00:03:35.323 EAL: Detected lcore 51 as core 15 on socket 1 00:03:35.323 EAL: Detected lcore 52 as core 16 on socket 1 00:03:35.323 EAL: Detected lcore 53 as core 17 on socket 1 00:03:35.323 EAL: Detected lcore 54 as core 18 on socket 1 00:03:35.323 EAL: Detected lcore 55 as core 19 on socket 1 00:03:35.323 EAL: Detected lcore 56 as core 20 on socket 1 00:03:35.323 EAL: Detected lcore 57 as core 21 on socket 1 00:03:35.323 EAL: Detected lcore 58 as core 22 on socket 1 00:03:35.323 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:35.323 EAL: Detected lcore 60 as core 24 on socket 1 00:03:35.323 EAL: Detected lcore 61 as core 25 on socket 1 00:03:35.323 EAL: Detected lcore 62 as core 26 on socket 1 00:03:35.323 EAL: Detected lcore 63 as core 27 on socket 1 00:03:35.323 EAL: Detected lcore 64 as core 28 on socket 1 00:03:35.323 EAL: Detected lcore 65 as core 29 on socket 1 00:03:35.323 EAL: Detected lcore 66 as core 30 on socket 1 00:03:35.323 EAL: Detected lcore 67 as core 31 on socket 1 00:03:35.323 EAL: Detected lcore 68 as core 32 on socket 1 00:03:35.323 EAL: Detected lcore 69 as core 33 on socket 1 00:03:35.323 EAL: Detected lcore 70 as core 34 on socket 1 00:03:35.323 EAL: Detected lcore 71 as core 35 on socket 1 00:03:35.323 EAL: Detected lcore 72 as core 0 on socket 0 00:03:35.323 EAL: Detected lcore 73 as core 1 on socket 0 00:03:35.323 EAL: Detected lcore 74 as core 2 on socket 0 00:03:35.323 EAL: Detected lcore 75 as core 3 on socket 0 00:03:35.323 EAL: Detected lcore 76 as core 4 on socket 0 00:03:35.323 EAL: Detected lcore 77 as core 5 on socket 0 00:03:35.323 EAL: Detected lcore 78 as core 6 on socket 0 00:03:35.323 EAL: Detected lcore 79 as core 7 on socket 0 00:03:35.323 EAL: Detected lcore 80 as core 8 on socket 0 00:03:35.323 EAL: Detected lcore 81 as core 9 on socket 0 00:03:35.323 EAL: Detected lcore 82 as core 10 on socket 0 00:03:35.323 EAL: Detected lcore 83 as core 11 on socket 0 00:03:35.323 EAL: Detected lcore 84 as core 12 on socket 0 00:03:35.323 EAL: Detected lcore 85 as core 13 on socket 0 00:03:35.323 EAL: Detected lcore 86 as core 14 on socket 0 00:03:35.323 EAL: Detected lcore 87 as core 15 on socket 0 00:03:35.323 EAL: Detected lcore 88 as core 16 on socket 0 00:03:35.323 EAL: Detected lcore 89 as core 17 on socket 0 00:03:35.323 EAL: Detected lcore 90 as core 18 on socket 0 00:03:35.323 EAL: Detected lcore 91 as core 19 on socket 0 00:03:35.323 EAL: Detected lcore 92 as core 20 on socket 0 00:03:35.323 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:35.323 EAL: Detected lcore 94 as core 22 on socket 0 00:03:35.323 EAL: Detected lcore 95 as core 23 on socket 0 00:03:35.323 EAL: Detected lcore 96 as core 24 on socket 0 00:03:35.323 EAL: Detected lcore 97 as core 25 on socket 0 00:03:35.323 EAL: Detected lcore 98 as core 26 on socket 0 00:03:35.324 EAL: Detected lcore 99 as core 27 on socket 0 00:03:35.324 EAL: Detected lcore 100 as core 28 on socket 0 00:03:35.324 EAL: Detected lcore 101 as core 29 on socket 0 00:03:35.324 EAL: Detected lcore 102 as core 30 on socket 0 00:03:35.324 EAL: Detected lcore 103 as core 31 on socket 0 00:03:35.324 EAL: Detected lcore 104 as core 32 on socket 0 00:03:35.324 EAL: Detected lcore 105 as core 33 on socket 0 00:03:35.324 EAL: Detected lcore 106 as core 34 on socket 0 00:03:35.324 EAL: Detected lcore 107 as core 35 on socket 0 00:03:35.324 EAL: Detected lcore 108 as core 0 on socket 1 00:03:35.324 EAL: Detected lcore 109 as core 1 on socket 1 00:03:35.324 EAL: Detected lcore 110 as core 2 on socket 1 00:03:35.324 EAL: Detected lcore 111 as core 3 on socket 1 00:03:35.324 EAL: Detected lcore 112 as core 4 on socket 1 00:03:35.324 EAL: Detected lcore 113 as core 5 on socket 1 00:03:35.324 EAL: Detected lcore 114 as core 6 on socket 1 00:03:35.324 EAL: Detected lcore 115 as core 7 on socket 1 00:03:35.324 EAL: Detected lcore 116 as core 8 on socket 1 00:03:35.324 EAL: Detected lcore 117 as core 9 on socket 1 00:03:35.324 EAL: Detected lcore 118 as core 10 on socket 1 00:03:35.324 EAL: Detected lcore 119 as core 11 on socket 1 00:03:35.324 EAL: Detected lcore 120 as core 12 on socket 1 00:03:35.324 EAL: Detected lcore 121 as core 13 on socket 1 00:03:35.324 EAL: Detected lcore 122 as core 14 on socket 1 00:03:35.324 EAL: Detected lcore 123 as core 15 on socket 1 00:03:35.324 EAL: Detected lcore 124 as core 16 on socket 1 00:03:35.324 EAL: Detected lcore 125 as core 17 on socket 1 00:03:35.324 EAL: Detected lcore 126 as core 18 on socket 1 00:03:35.324 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:35.324 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:35.324 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:35.324 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:35.324 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:35.324 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:35.324 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:35.324 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:35.324 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:35.324 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:35.324 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:35.324 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:35.324 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:35.324 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:35.324 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:35.324 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:35.324 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:35.324 EAL: Maximum logical cores by configuration: 128 00:03:35.324 EAL: Detected CPU lcores: 128 00:03:35.324 EAL: Detected NUMA nodes: 2 00:03:35.324 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:35.324 EAL: Detected shared linkage of DPDK 00:03:35.324 EAL: No shared files mode enabled, IPC will be disabled 00:03:35.324 EAL: Bus pci wants IOVA as 'DC' 00:03:35.324 EAL: Buses did not request a specific IOVA mode. 00:03:35.324 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:35.324 EAL: Selected IOVA mode 'VA' 00:03:35.324 EAL: Probing VFIO support... 00:03:35.324 EAL: IOMMU type 1 (Type 1) is supported 00:03:35.324 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:35.324 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:35.324 EAL: VFIO support initialized 00:03:35.324 EAL: Ask a virtual area of 0x2e000 bytes 00:03:35.324 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:35.324 EAL: Setting up physically contiguous memory... 
00:03:35.324 EAL: Setting maximum number of open files to 524288 00:03:35.324 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:35.324 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:35.324 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:35.324 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:35.324 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.324 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:35.324 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.324 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.324 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:35.324 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:35.324 EAL: Hugepages will be freed exactly as allocated. 
00:03:35.324 EAL: No shared files mode enabled, IPC is disabled 00:03:35.324 EAL: No shared files mode enabled, IPC is disabled 00:03:35.324 EAL: TSC frequency is ~2400000 KHz 00:03:35.324 EAL: Main lcore 0 is ready (tid=7fb3173dea40;cpuset=[0]) 00:03:35.324 EAL: Trying to obtain current memory policy. 00:03:35.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.324 EAL: Restoring previous memory policy: 0 00:03:35.324 EAL: request: mp_malloc_sync 00:03:35.324 EAL: No shared files mode enabled, IPC is disabled 00:03:35.324 EAL: Heap on socket 0 was expanded by 2MB 00:03:35.324 EAL: No shared files mode enabled, IPC is disabled 00:03:35.585 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:35.586 EAL: Mem event callback 'spdk:(nil)' registered 00:03:35.586 00:03:35.586 00:03:35.586 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.586 http://cunit.sourceforge.net/ 00:03:35.586 00:03:35.586 00:03:35.586 Suite: components_suite 00:03:35.846 Test: vtophys_malloc_test ...passed 00:03:35.846 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:35.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.846 EAL: Restoring previous memory policy: 4 00:03:35.846 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.846 EAL: request: mp_malloc_sync 00:03:35.846 EAL: No shared files mode enabled, IPC is disabled 00:03:35.846 EAL: Heap on socket 0 was expanded by 4MB 00:03:35.846 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.846 EAL: request: mp_malloc_sync 00:03:35.846 EAL: No shared files mode enabled, IPC is disabled 00:03:35.846 EAL: Heap on socket 0 was shrunk by 4MB 00:03:35.846 EAL: Trying to obtain current memory policy. 
00:03:35.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.846 EAL: Restoring previous memory policy: 4 00:03:35.846 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.846 EAL: request: mp_malloc_sync 00:03:35.846 EAL: No shared files mode enabled, IPC is disabled 00:03:35.846 EAL: Heap on socket 0 was expanded by 6MB 00:03:35.846 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.846 EAL: request: mp_malloc_sync 00:03:35.846 EAL: No shared files mode enabled, IPC is disabled 00:03:35.846 EAL: Heap on socket 0 was shrunk by 6MB 00:03:35.846 EAL: Trying to obtain current memory policy. 00:03:35.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.847 EAL: Restoring previous memory policy: 4 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was expanded by 10MB 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was shrunk by 10MB 00:03:35.847 EAL: Trying to obtain current memory policy. 00:03:35.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.847 EAL: Restoring previous memory policy: 4 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was expanded by 18MB 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was shrunk by 18MB 00:03:35.847 EAL: Trying to obtain current memory policy. 
00:03:35.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.847 EAL: Restoring previous memory policy: 4 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was expanded by 34MB 00:03:35.847 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.847 EAL: request: mp_malloc_sync 00:03:35.847 EAL: No shared files mode enabled, IPC is disabled 00:03:35.847 EAL: Heap on socket 0 was shrunk by 34MB 00:03:35.847 EAL: Trying to obtain current memory policy. 00:03:35.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.107 EAL: Restoring previous memory policy: 4 00:03:36.107 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.107 EAL: request: mp_malloc_sync 00:03:36.107 EAL: No shared files mode enabled, IPC is disabled 00:03:36.107 EAL: Heap on socket 0 was expanded by 66MB 00:03:36.107 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.107 EAL: request: mp_malloc_sync 00:03:36.107 EAL: No shared files mode enabled, IPC is disabled 00:03:36.107 EAL: Heap on socket 0 was shrunk by 66MB 00:03:36.107 EAL: Trying to obtain current memory policy. 00:03:36.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.107 EAL: Restoring previous memory policy: 4 00:03:36.107 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.107 EAL: request: mp_malloc_sync 00:03:36.107 EAL: No shared files mode enabled, IPC is disabled 00:03:36.107 EAL: Heap on socket 0 was expanded by 130MB 00:03:36.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.367 EAL: request: mp_malloc_sync 00:03:36.367 EAL: No shared files mode enabled, IPC is disabled 00:03:36.367 EAL: Heap on socket 0 was shrunk by 130MB 00:03:36.367 EAL: Trying to obtain current memory policy. 
00:03:36.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.367 EAL: Restoring previous memory policy: 4 00:03:36.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.367 EAL: request: mp_malloc_sync 00:03:36.367 EAL: No shared files mode enabled, IPC is disabled 00:03:36.367 EAL: Heap on socket 0 was expanded by 258MB 00:03:36.937 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.937 EAL: request: mp_malloc_sync 00:03:36.937 EAL: No shared files mode enabled, IPC is disabled 00:03:36.937 EAL: Heap on socket 0 was shrunk by 258MB 00:03:37.197 EAL: Trying to obtain current memory policy. 00:03:37.197 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:37.197 EAL: Restoring previous memory policy: 4 00:03:37.197 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.197 EAL: request: mp_malloc_sync 00:03:37.197 EAL: No shared files mode enabled, IPC is disabled 00:03:37.197 EAL: Heap on socket 0 was expanded by 514MB 00:03:37.768 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.768 EAL: request: mp_malloc_sync 00:03:37.768 EAL: No shared files mode enabled, IPC is disabled 00:03:37.768 EAL: Heap on socket 0 was shrunk by 514MB 00:03:38.339 EAL: Trying to obtain current memory policy. 
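The env_vtophys trace above expands and shrinks the heap in near-doubling steps (6, 10, 18, 34, 66, 130, 258 and 514 MB, with 1026 MB following below), each one a power of two plus 2 MB. A minimal bash sketch of that size progression; the formula is inferred from the numbers in this log, not taken from the test source:

```shell
#!/usr/bin/env bash
# Heap-expansion sizes (in MB) seen in the env_vtophys trace:
# each step is (1 << k) + 2 for k = 2..10, i.e. a power of two
# plus 2 MB of apparent overhead (pattern observed in this log).
sizes=()
for k in $(seq 2 10); do
  sizes+=( $(( (1 << k) + 2 )) )
done
echo "${sizes[@]}"   # 6 10 18 34 66 130 258 514 1026
```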
00:03:38.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:38.599 EAL: Restoring previous memory policy: 4 00:03:38.599 EAL: Calling mem event callback 'spdk:(nil)' 00:03:38.599 EAL: request: mp_malloc_sync 00:03:38.599 EAL: No shared files mode enabled, IPC is disabled 00:03:38.599 EAL: Heap on socket 0 was expanded by 1026MB 00:03:39.979 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.979 EAL: request: mp_malloc_sync 00:03:39.979 EAL: No shared files mode enabled, IPC is disabled 00:03:39.979 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.919 passed 00:03:40.919 00:03:40.919 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.919 suites 1 1 n/a 0 0 00:03:40.919 tests 2 2 2 0 0 00:03:40.919 asserts 497 497 497 0 n/a 00:03:40.919 00:03:40.919 Elapsed time = 5.389 seconds 00:03:40.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.919 EAL: request: mp_malloc_sync 00:03:40.919 EAL: No shared files mode enabled, IPC is disabled 00:03:40.919 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.919 EAL: No shared files mode enabled, IPC is disabled 00:03:40.919 EAL: No shared files mode enabled, IPC is disabled 00:03:40.919 EAL: No shared files mode enabled, IPC is disabled 00:03:40.919 00:03:40.919 real 0m5.640s 00:03:40.919 user 0m4.859s 00:03:40.919 sys 0m0.741s 00:03:40.919 11:15:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.919 11:15:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.919 ************************************ 00:03:40.919 END TEST env_vtophys 00:03:40.919 ************************************ 00:03:40.919 11:15:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.919 11:15:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.919 11:15:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.919 11:15:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.179 
************************************ 00:03:41.179 START TEST env_pci 00:03:41.179 ************************************ 00:03:41.179 11:15:40 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:41.179 00:03:41.179 00:03:41.179 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.179 http://cunit.sourceforge.net/ 00:03:41.179 00:03:41.179 00:03:41.179 Suite: pci 00:03:41.179 Test: pci_hook ...[2024-12-07 11:15:40.316472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2242403 has claimed it 00:03:41.179 EAL: Cannot find device (10000:00:01.0) 00:03:41.179 EAL: Failed to attach device on primary process 00:03:41.179 passed 00:03:41.179 00:03:41.179 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.179 suites 1 1 n/a 0 0 00:03:41.179 tests 1 1 1 0 0 00:03:41.179 asserts 25 25 25 0 n/a 00:03:41.179 00:03:41.179 Elapsed time = 0.056 seconds 00:03:41.179 00:03:41.179 real 0m0.141s 00:03:41.179 user 0m0.057s 00:03:41.179 sys 0m0.083s 00:03:41.179 11:15:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.179 11:15:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:41.179 ************************************ 00:03:41.179 END TEST env_pci 00:03:41.179 ************************************ 00:03:41.179 11:15:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:41.179 11:15:40 env -- env/env.sh@15 -- # uname 00:03:41.179 11:15:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:41.179 11:15:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:41.179 11:15:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:41.179 11:15:40 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:41.179 11:15:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.179 11:15:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.179 ************************************ 00:03:41.179 START TEST env_dpdk_post_init 00:03:41.179 ************************************ 00:03:41.179 11:15:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:41.439 EAL: Detected CPU lcores: 128 00:03:41.439 EAL: Detected NUMA nodes: 2 00:03:41.439 EAL: Detected shared linkage of DPDK 00:03:41.439 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:41.439 EAL: Selected IOVA mode 'VA' 00:03:41.439 EAL: VFIO support initialized 00:03:41.439 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:41.439 EAL: Using IOMMU type 1 (Type 1) 00:03:41.700 EAL: Ignore mapping IO port bar(1) 00:03:41.700 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:41.959 EAL: Ignore mapping IO port bar(1) 00:03:41.959 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:42.218 EAL: Ignore mapping IO port bar(1) 00:03:42.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:42.218 EAL: Ignore mapping IO port bar(1) 00:03:42.478 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:42.478 EAL: Ignore mapping IO port bar(1) 00:03:42.738 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:42.738 EAL: Ignore mapping IO port bar(1) 00:03:42.999 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:42.999 EAL: Ignore mapping IO port bar(1) 00:03:42.999 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:43.259 EAL: Ignore mapping IO port bar(1) 00:03:43.259 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:43.518 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:43.779 EAL: Ignore mapping IO port bar(1) 00:03:43.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:43.779 EAL: Ignore mapping IO port bar(1) 00:03:44.040 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:44.040 EAL: Ignore mapping IO port bar(1) 00:03:44.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:44.300 EAL: Ignore mapping IO port bar(1) 00:03:44.561 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:44.561 EAL: Ignore mapping IO port bar(1) 00:03:44.561 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:44.822 EAL: Ignore mapping IO port bar(1) 00:03:44.823 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:45.083 EAL: Ignore mapping IO port bar(1) 00:03:45.083 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:45.344 EAL: Ignore mapping IO port bar(1) 00:03:45.344 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:45.344 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:45.344 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:45.344 Starting DPDK initialization... 00:03:45.344 Starting SPDK post initialization... 00:03:45.344 SPDK NVMe probe 00:03:45.344 Attaching to 0000:65:00.0 00:03:45.344 Attached to 0000:65:00.0 00:03:45.344 Cleaning up... 
00:03:47.270 00:03:47.270 real 0m5.869s 00:03:47.270 user 0m0.158s 00:03:47.270 sys 0m0.258s 00:03:47.270 11:15:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.270 11:15:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.270 ************************************ 00:03:47.270 END TEST env_dpdk_post_init 00:03:47.270 ************************************ 00:03:47.270 11:15:46 env -- env/env.sh@26 -- # uname 00:03:47.270 11:15:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:47.270 11:15:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:47.270 11:15:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.270 11:15:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.270 11:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.270 ************************************ 00:03:47.270 START TEST env_mem_callbacks 00:03:47.270 ************************************ 00:03:47.270 11:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:47.270 EAL: Detected CPU lcores: 128 00:03:47.270 EAL: Detected NUMA nodes: 2 00:03:47.270 EAL: Detected shared linkage of DPDK 00:03:47.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:47.270 EAL: Selected IOVA mode 'VA' 00:03:47.270 EAL: VFIO support initialized 00:03:47.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:47.270 00:03:47.270 00:03:47.270 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.270 http://cunit.sourceforge.net/ 00:03:47.270 00:03:47.270 00:03:47.270 Suite: memory 00:03:47.270 Test: test ... 
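In the mem_callbacks trace that follows, every sufficiently large malloc is paired with a `register` of the backing region and every free with an `unregister`. The registered lengths match the allocation plus a small header rounded up to whole 2 MiB hugepages; a sketch of that rule, inferred from the observed numbers rather than from the SPDK source:

```shell
#!/usr/bin/env bash
# Malloc size -> registered region length, per the mem_callbacks trace.
# Inferred rule: the allocation plus a small header rounds up to whole
# 2 MiB hugepages, i.e. floor(len / page) + 1 pages for these sizes.
page=$(( 2 * 1024 * 1024 ))
for len in 3145728 4194304 8388608; do
  echo "malloc ${len} -> register $(( (len / page + 1) * page ))"
done
# prints 4194304, 6291456 and 10485760, matching the trace below
```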
00:03:47.270 register 0x200000200000 2097152 00:03:47.270 malloc 3145728 00:03:47.270 register 0x200000400000 4194304 00:03:47.270 buf 0x2000004fffc0 len 3145728 PASSED 00:03:47.270 malloc 64 00:03:47.270 buf 0x2000004ffec0 len 64 PASSED 00:03:47.270 malloc 4194304 00:03:47.270 register 0x200000800000 6291456 00:03:47.270 buf 0x2000009fffc0 len 4194304 PASSED 00:03:47.270 free 0x2000004fffc0 3145728 00:03:47.270 free 0x2000004ffec0 64 00:03:47.270 unregister 0x200000400000 4194304 PASSED 00:03:47.270 free 0x2000009fffc0 4194304 00:03:47.270 unregister 0x200000800000 6291456 PASSED 00:03:47.270 malloc 8388608 00:03:47.270 register 0x200000400000 10485760 00:03:47.270 buf 0x2000005fffc0 len 8388608 PASSED 00:03:47.270 free 0x2000005fffc0 8388608 00:03:47.270 unregister 0x200000400000 10485760 PASSED 00:03:47.270 passed 00:03:47.270 00:03:47.270 Run Summary: Type Total Ran Passed Failed Inactive 00:03:47.270 suites 1 1 n/a 0 0 00:03:47.270 tests 1 1 1 0 0 00:03:47.270 asserts 15 15 15 0 n/a 00:03:47.270 00:03:47.270 Elapsed time = 0.047 seconds 00:03:47.531 00:03:47.531 real 0m0.173s 00:03:47.531 user 0m0.088s 00:03:47.531 sys 0m0.084s 00:03:47.531 11:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.531 11:15:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:47.531 ************************************ 00:03:47.531 END TEST env_mem_callbacks 00:03:47.531 ************************************ 00:03:47.531 00:03:47.531 real 0m12.661s 00:03:47.531 user 0m5.607s 00:03:47.531 sys 0m1.589s 00:03:47.531 11:15:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.531 11:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.531 ************************************ 00:03:47.531 END TEST env 00:03:47.531 ************************************ 00:03:47.531 11:15:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:47.531 11:15:46 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.531 11:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.531 11:15:46 -- common/autotest_common.sh@10 -- # set +x 00:03:47.531 ************************************ 00:03:47.531 START TEST rpc 00:03:47.531 ************************************ 00:03:47.531 11:15:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:47.531 * Looking for test storage... 00:03:47.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.531 11:15:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.531 11:15:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.531 11:15:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:47.792 11:15:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:47.792 11:15:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.792 11:15:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.792 11:15:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.792 11:15:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.793 11:15:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.793 11:15:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.793 11:15:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.793 11:15:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:47.793 11:15:46 rpc -- scripts/common.sh@345 -- # : 1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.793 11:15:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.793 11:15:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@353 -- # local d=1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.793 11:15:46 rpc -- scripts/common.sh@355 -- # echo 1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.793 11:15:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@353 -- # local d=2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.793 11:15:46 rpc -- scripts/common.sh@355 -- # echo 2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.793 11:15:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.793 11:15:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.793 11:15:46 rpc -- scripts/common.sh@368 -- # return 0 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:47.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.793 --rc genhtml_branch_coverage=1 00:03:47.793 --rc genhtml_function_coverage=1 00:03:47.793 --rc genhtml_legend=1 00:03:47.793 --rc geninfo_all_blocks=1 00:03:47.793 --rc geninfo_unexecuted_blocks=1 00:03:47.793 00:03:47.793 ' 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:47.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.793 --rc genhtml_branch_coverage=1 00:03:47.793 --rc genhtml_function_coverage=1 00:03:47.793 --rc genhtml_legend=1 00:03:47.793 --rc geninfo_all_blocks=1 00:03:47.793 --rc geninfo_unexecuted_blocks=1 00:03:47.793 00:03:47.793 ' 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:47.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:47.793 --rc genhtml_branch_coverage=1 00:03:47.793 --rc genhtml_function_coverage=1 00:03:47.793 --rc genhtml_legend=1 00:03:47.793 --rc geninfo_all_blocks=1 00:03:47.793 --rc geninfo_unexecuted_blocks=1 00:03:47.793 00:03:47.793 ' 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:47.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.793 --rc genhtml_branch_coverage=1 00:03:47.793 --rc genhtml_function_coverage=1 00:03:47.793 --rc genhtml_legend=1 00:03:47.793 --rc geninfo_all_blocks=1 00:03:47.793 --rc geninfo_unexecuted_blocks=1 00:03:47.793 00:03:47.793 ' 00:03:47.793 11:15:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2243867 00:03:47.793 11:15:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.793 11:15:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2243867 00:03:47.793 11:15:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 2243867 ']' 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.793 11:15:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.793 [2024-12-07 11:15:47.036724] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
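Earlier in this block, scripts/common.sh decides that the installed lcov (1.15) predates version 2 by splitting both version strings on '.', '-' and ':' and comparing the components numerically. A condensed sketch of that comparison, simplified from the xtrace rather than copied from the function (non-numeric components are not handled here):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version check, as in the cmp_versions
# xtrace: split on the same IFS characters (.-:) and compare fields.
version_lt() {
  local IFS='.-:'
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}   # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 < 2"
```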
00:03:47.793 [2024-12-07 11:15:47.036838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243867 ] 00:03:48.053 [2024-12-07 11:15:47.167228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.053 [2024-12-07 11:15:47.262425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:48.053 [2024-12-07 11:15:47.262467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2243867' to capture a snapshot of events at runtime. 00:03:48.053 [2024-12-07 11:15:47.262482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:48.053 [2024-12-07 11:15:47.262492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:48.053 [2024-12-07 11:15:47.262505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2243867 for offline analysis/debug. 
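The app_setup_trace notices above spell out how to snapshot this spdk_tgt instance's tracepoints. Collected into a small helper for reference; the PID (2243867) and shm path are specific to this run, and the variable names are illustrative only:

```shell
#!/usr/bin/env bash
# Trace-capture commands taken verbatim from the NOTICE lines above.
# PID 2243867 belongs to this run only; substitute the live PID.
pid=2243867
live_cmd="spdk_trace -s spdk_tgt -p ${pid}"        # runtime snapshot
shm_file="/dev/shm/spdk_tgt_trace.pid${pid}"        # offline copy
echo "live capture: ${live_cmd}"
echo "offline copy: ${shm_file}"
```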
00:03:48.053 [2024-12-07 11:15:47.263690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.623 11:15:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.623 11:15:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:48.623 11:15:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.623 11:15:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.623 11:15:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:48.623 11:15:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:48.623 11:15:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.623 11:15:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.623 11:15:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.623 ************************************ 00:03:48.623 START TEST rpc_integrity 00:03:48.623 ************************************ 00:03:48.623 11:15:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:48.623 11:15:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.623 11:15:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.623 11:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.623 11:15:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.623 11:15:47 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.623 11:15:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.884 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.884 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.884 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.884 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.885 { 00:03:48.885 "name": "Malloc0", 00:03:48.885 "aliases": [ 00:03:48.885 "f341c049-66f8-42e9-9dad-08c5d9509cf5" 00:03:48.885 ], 00:03:48.885 "product_name": "Malloc disk", 00:03:48.885 "block_size": 512, 00:03:48.885 "num_blocks": 16384, 00:03:48.885 "uuid": "f341c049-66f8-42e9-9dad-08c5d9509cf5", 00:03:48.885 "assigned_rate_limits": { 00:03:48.885 "rw_ios_per_sec": 0, 00:03:48.885 "rw_mbytes_per_sec": 0, 00:03:48.885 "r_mbytes_per_sec": 0, 00:03:48.885 "w_mbytes_per_sec": 0 00:03:48.885 }, 00:03:48.885 "claimed": false, 00:03:48.885 "zoned": false, 00:03:48.885 "supported_io_types": { 00:03:48.885 "read": true, 00:03:48.885 "write": true, 00:03:48.885 "unmap": true, 00:03:48.885 "flush": true, 00:03:48.885 "reset": true, 00:03:48.885 "nvme_admin": false, 00:03:48.885 "nvme_io": false, 00:03:48.885 "nvme_io_md": false, 00:03:48.885 "write_zeroes": true, 00:03:48.885 "zcopy": true, 00:03:48.885 "get_zone_info": false, 00:03:48.885 
"zone_management": false, 00:03:48.885 "zone_append": false, 00:03:48.885 "compare": false, 00:03:48.885 "compare_and_write": false, 00:03:48.885 "abort": true, 00:03:48.885 "seek_hole": false, 00:03:48.885 "seek_data": false, 00:03:48.885 "copy": true, 00:03:48.885 "nvme_iov_md": false 00:03:48.885 }, 00:03:48.885 "memory_domains": [ 00:03:48.885 { 00:03:48.885 "dma_device_id": "system", 00:03:48.885 "dma_device_type": 1 00:03:48.885 }, 00:03:48.885 { 00:03:48.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.885 "dma_device_type": 2 00:03:48.885 } 00:03:48.885 ], 00:03:48.885 "driver_specific": {} 00:03:48.885 } 00:03:48.885 ]' 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.885 [2024-12-07 11:15:48.092810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:48.885 [2024-12-07 11:15:48.092875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.885 [2024-12-07 11:15:48.092900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fe80 00:03:48.885 [2024-12-07 11:15:48.092911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.885 [2024-12-07 11:15:48.095214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.885 [2024-12-07 11:15:48.095242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.885 Passthru0 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.885 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.885 { 00:03:48.885 "name": "Malloc0", 00:03:48.885 "aliases": [ 00:03:48.885 "f341c049-66f8-42e9-9dad-08c5d9509cf5" 00:03:48.885 ], 00:03:48.885 "product_name": "Malloc disk", 00:03:48.885 "block_size": 512, 00:03:48.885 "num_blocks": 16384, 00:03:48.885 "uuid": "f341c049-66f8-42e9-9dad-08c5d9509cf5", 00:03:48.885 "assigned_rate_limits": { 00:03:48.885 "rw_ios_per_sec": 0, 00:03:48.885 "rw_mbytes_per_sec": 0, 00:03:48.885 "r_mbytes_per_sec": 0, 00:03:48.885 "w_mbytes_per_sec": 0 00:03:48.885 }, 00:03:48.885 "claimed": true, 00:03:48.885 "claim_type": "exclusive_write", 00:03:48.885 "zoned": false, 00:03:48.885 "supported_io_types": { 00:03:48.885 "read": true, 00:03:48.885 "write": true, 00:03:48.885 "unmap": true, 00:03:48.885 "flush": true, 00:03:48.885 "reset": true, 00:03:48.885 "nvme_admin": false, 00:03:48.885 "nvme_io": false, 00:03:48.885 "nvme_io_md": false, 00:03:48.885 "write_zeroes": true, 00:03:48.885 "zcopy": true, 00:03:48.885 "get_zone_info": false, 00:03:48.885 "zone_management": false, 00:03:48.885 "zone_append": false, 00:03:48.885 "compare": false, 00:03:48.885 "compare_and_write": false, 00:03:48.885 "abort": true, 00:03:48.885 "seek_hole": false, 00:03:48.885 "seek_data": false, 00:03:48.885 "copy": true, 00:03:48.885 "nvme_iov_md": false 00:03:48.885 }, 00:03:48.885 "memory_domains": [ 00:03:48.885 { 00:03:48.885 "dma_device_id": "system", 00:03:48.885 "dma_device_type": 1 00:03:48.885 }, 00:03:48.885 { 00:03:48.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.885 "dma_device_type": 2 00:03:48.885 } 00:03:48.885 ], 00:03:48.885 "driver_specific": {} 00:03:48.885 }, 00:03:48.885 { 
00:03:48.885 "name": "Passthru0", 00:03:48.885 "aliases": [ 00:03:48.885 "f98d5428-62e6-533f-87b1-4feea82b116c" 00:03:48.885 ], 00:03:48.885 "product_name": "passthru", 00:03:48.885 "block_size": 512, 00:03:48.885 "num_blocks": 16384, 00:03:48.885 "uuid": "f98d5428-62e6-533f-87b1-4feea82b116c", 00:03:48.885 "assigned_rate_limits": { 00:03:48.885 "rw_ios_per_sec": 0, 00:03:48.885 "rw_mbytes_per_sec": 0, 00:03:48.885 "r_mbytes_per_sec": 0, 00:03:48.885 "w_mbytes_per_sec": 0 00:03:48.885 }, 00:03:48.885 "claimed": false, 00:03:48.885 "zoned": false, 00:03:48.885 "supported_io_types": { 00:03:48.885 "read": true, 00:03:48.885 "write": true, 00:03:48.885 "unmap": true, 00:03:48.885 "flush": true, 00:03:48.885 "reset": true, 00:03:48.885 "nvme_admin": false, 00:03:48.885 "nvme_io": false, 00:03:48.885 "nvme_io_md": false, 00:03:48.885 "write_zeroes": true, 00:03:48.885 "zcopy": true, 00:03:48.885 "get_zone_info": false, 00:03:48.885 "zone_management": false, 00:03:48.885 "zone_append": false, 00:03:48.885 "compare": false, 00:03:48.885 "compare_and_write": false, 00:03:48.885 "abort": true, 00:03:48.885 "seek_hole": false, 00:03:48.885 "seek_data": false, 00:03:48.885 "copy": true, 00:03:48.885 "nvme_iov_md": false 00:03:48.885 }, 00:03:48.885 "memory_domains": [ 00:03:48.885 { 00:03:48.885 "dma_device_id": "system", 00:03:48.885 "dma_device_type": 1 00:03:48.885 }, 00:03:48.885 { 00:03:48.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.885 "dma_device_type": 2 00:03:48.885 } 00:03:48.885 ], 00:03:48.885 "driver_specific": { 00:03:48.885 "passthru": { 00:03:48.885 "name": "Passthru0", 00:03:48.885 "base_bdev_name": "Malloc0" 00:03:48.885 } 00:03:48.885 } 00:03:48.885 } 00:03:48.885 ]' 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.885 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.886 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.886 11:15:48 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.886 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.886 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.886 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.886 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.886 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.146 11:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.146 00:03:49.146 real 0m0.317s 00:03:49.146 user 0m0.192s 00:03:49.146 sys 0m0.043s 00:03:49.146 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.146 11:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.146 ************************************ 00:03:49.146 END TEST rpc_integrity 00:03:49.146 ************************************ 00:03:49.146 11:15:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.146 11:15:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.146 11:15:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.146 11:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.146 ************************************ 00:03:49.146 START TEST rpc_plugins 
00:03:49.146 ************************************ 00:03:49.146 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:49.146 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.146 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.146 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.147 { 00:03:49.147 "name": "Malloc1", 00:03:49.147 "aliases": [ 00:03:49.147 "51f3001d-cf4e-4991-844d-af0603543aa8" 00:03:49.147 ], 00:03:49.147 "product_name": "Malloc disk", 00:03:49.147 "block_size": 4096, 00:03:49.147 "num_blocks": 256, 00:03:49.147 "uuid": "51f3001d-cf4e-4991-844d-af0603543aa8", 00:03:49.147 "assigned_rate_limits": { 00:03:49.147 "rw_ios_per_sec": 0, 00:03:49.147 "rw_mbytes_per_sec": 0, 00:03:49.147 "r_mbytes_per_sec": 0, 00:03:49.147 "w_mbytes_per_sec": 0 00:03:49.147 }, 00:03:49.147 "claimed": false, 00:03:49.147 "zoned": false, 00:03:49.147 "supported_io_types": { 00:03:49.147 "read": true, 00:03:49.147 "write": true, 00:03:49.147 "unmap": true, 00:03:49.147 "flush": true, 00:03:49.147 "reset": true, 00:03:49.147 "nvme_admin": false, 00:03:49.147 "nvme_io": false, 00:03:49.147 "nvme_io_md": false, 00:03:49.147 "write_zeroes": true, 00:03:49.147 "zcopy": true, 00:03:49.147 "get_zone_info": false, 00:03:49.147 "zone_management": false, 00:03:49.147 
"zone_append": false, 00:03:49.147 "compare": false, 00:03:49.147 "compare_and_write": false, 00:03:49.147 "abort": true, 00:03:49.147 "seek_hole": false, 00:03:49.147 "seek_data": false, 00:03:49.147 "copy": true, 00:03:49.147 "nvme_iov_md": false 00:03:49.147 }, 00:03:49.147 "memory_domains": [ 00:03:49.147 { 00:03:49.147 "dma_device_id": "system", 00:03:49.147 "dma_device_type": 1 00:03:49.147 }, 00:03:49.147 { 00:03:49.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.147 "dma_device_type": 2 00:03:49.147 } 00:03:49.147 ], 00:03:49.147 "driver_specific": {} 00:03:49.147 } 00:03:49.147 ]' 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.147 11:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.147 00:03:49.147 real 0m0.157s 00:03:49.147 user 0m0.099s 00:03:49.147 sys 0m0.021s 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.147 11:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.147 ************************************ 
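[editor's note] The rpc_plugins trace above ends with `jq length` verifying that `bdev_get_bdevs` reported exactly one Malloc bdev after `create_malloc`. A minimal sketch of that check, using a hypothetical sample trimmed from the JSON logged above (requires `jq`):

```shell
# Hypothetical one-element sample of the bdev_get_bdevs response from the log;
# the real test pipes the live RPC output through the same `jq length` filter.
bdevs='[{"name":"Malloc1","block_size":4096,"num_blocks":256}]'
count=$(echo "$bdevs" | jq length)
[ "$count" -eq 1 ] && echo "one bdev registered"
```

After `delete_malloc Malloc1` the same filter is expected to return 0 against an empty array, which is the `'[' 0 == 0 ']'` comparison visible in the trace.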
00:03:49.147 END TEST rpc_plugins 00:03:49.147 ************************************ 00:03:49.407 11:15:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.407 11:15:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.407 11:15:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.407 11:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.407 ************************************ 00:03:49.407 START TEST rpc_trace_cmd_test 00:03:49.407 ************************************ 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.407 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2243867", 00:03:49.407 "tpoint_group_mask": "0x8", 00:03:49.407 "iscsi_conn": { 00:03:49.407 "mask": "0x2", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "scsi": { 00:03:49.407 "mask": "0x4", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "bdev": { 00:03:49.407 "mask": "0x8", 00:03:49.407 "tpoint_mask": "0xffffffffffffffff" 00:03:49.407 }, 00:03:49.407 "nvmf_rdma": { 00:03:49.407 "mask": "0x10", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "nvmf_tcp": { 00:03:49.407 "mask": "0x20", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "ftl": { 00:03:49.407 "mask": "0x40", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "blobfs": { 00:03:49.407 "mask": "0x80", 00:03:49.407 
"tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "dsa": { 00:03:49.407 "mask": "0x200", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "thread": { 00:03:49.407 "mask": "0x400", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "nvme_pcie": { 00:03:49.407 "mask": "0x800", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "iaa": { 00:03:49.407 "mask": "0x1000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "nvme_tcp": { 00:03:49.407 "mask": "0x2000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "bdev_nvme": { 00:03:49.407 "mask": "0x4000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "sock": { 00:03:49.407 "mask": "0x8000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "blob": { 00:03:49.407 "mask": "0x10000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "bdev_raid": { 00:03:49.407 "mask": "0x20000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 }, 00:03:49.407 "scheduler": { 00:03:49.407 "mask": "0x40000", 00:03:49.407 "tpoint_mask": "0x0" 00:03:49.407 } 00:03:49.407 }' 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:49.407 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:49.408 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:49.408 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:49.408 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:49.669 11:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:49.669 00:03:49.669 real 0m0.232s 00:03:49.669 user 0m0.193s 00:03:49.669 sys 0m0.031s 00:03:49.669 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.669 11:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.669 ************************************ 00:03:49.669 END TEST rpc_trace_cmd_test 00:03:49.669 ************************************ 00:03:49.669 11:15:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:49.669 11:15:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:49.669 11:15:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:49.669 11:15:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.669 11:15:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.669 11:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.669 ************************************ 00:03:49.669 START TEST rpc_daemon_integrity 00:03:49.669 ************************************ 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- 
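[editor's note] The rpc_trace_cmd_test sequence above asserts three things on the `trace_get_info` response: the tpoint map has more than two keys, the `tpoint_group_mask` and `tpoint_shm_path` keys exist, and the `bdev` group's `tpoint_mask` is non-zero (the target was started with group mask `0x8`). A sketch of those `jq` probes against a hypothetical excerpt of the logged response:

```shell
# Hypothetical excerpt of the trace_get_info JSON shown in the log above.
info='{"tpoint_group_mask":"0x8","bdev":{"mask":"0x8","tpoint_mask":"0xffffffffffffffff"}}'
echo "$info" | jq 'has("tpoint_group_mask")'   # prints: true
mask=$(echo "$info" | jq -r '.bdev.tpoint_mask')
# Mirrors the trace's '[' 0xffffffffffffffff '!=' 0x0 ']' comparison.
[ "$mask" != "0x0" ] && echo "bdev tracepoints enabled"
```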
common/autotest_common.sh@10 -- # set +x 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.669 { 00:03:49.669 "name": "Malloc2", 00:03:49.669 "aliases": [ 00:03:49.669 "27c0406f-a6a5-4133-8561-e74074377d5f" 00:03:49.669 ], 00:03:49.669 "product_name": "Malloc disk", 00:03:49.669 "block_size": 512, 00:03:49.669 "num_blocks": 16384, 00:03:49.669 "uuid": "27c0406f-a6a5-4133-8561-e74074377d5f", 00:03:49.669 "assigned_rate_limits": { 00:03:49.669 "rw_ios_per_sec": 0, 00:03:49.669 "rw_mbytes_per_sec": 0, 00:03:49.669 "r_mbytes_per_sec": 0, 00:03:49.669 "w_mbytes_per_sec": 0 00:03:49.669 }, 00:03:49.669 "claimed": false, 00:03:49.669 "zoned": false, 00:03:49.669 "supported_io_types": { 00:03:49.669 "read": true, 00:03:49.669 "write": true, 00:03:49.669 "unmap": true, 00:03:49.669 "flush": true, 00:03:49.669 "reset": true, 00:03:49.669 "nvme_admin": false, 00:03:49.669 "nvme_io": false, 00:03:49.669 "nvme_io_md": false, 00:03:49.669 "write_zeroes": true, 00:03:49.669 "zcopy": true, 00:03:49.669 "get_zone_info": false, 00:03:49.669 "zone_management": false, 00:03:49.669 "zone_append": false, 00:03:49.669 "compare": false, 00:03:49.669 "compare_and_write": false, 00:03:49.669 "abort": true, 00:03:49.669 "seek_hole": false, 00:03:49.669 "seek_data": false, 00:03:49.669 "copy": true, 00:03:49.669 "nvme_iov_md": false 00:03:49.669 }, 00:03:49.669 "memory_domains": [ 00:03:49.669 { 
00:03:49.669 "dma_device_id": "system", 00:03:49.669 "dma_device_type": 1 00:03:49.669 }, 00:03:49.669 { 00:03:49.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.669 "dma_device_type": 2 00:03:49.669 } 00:03:49.669 ], 00:03:49.669 "driver_specific": {} 00:03:49.669 } 00:03:49.669 ]' 00:03:49.669 11:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.669 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.669 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:49.669 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.670 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.670 [2024-12-07 11:15:49.019203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:49.670 [2024-12-07 11:15:49.019251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.670 [2024-12-07 11:15:49.019273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021080 00:03:49.670 [2024-12-07 11:15:49.019284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.670 [2024-12-07 11:15:49.021541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.670 [2024-12-07 11:15:49.021568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.930 Passthru0 00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:49.930 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.930 { 00:03:49.930 "name": "Malloc2", 00:03:49.930 "aliases": [ 00:03:49.930 "27c0406f-a6a5-4133-8561-e74074377d5f" 00:03:49.930 ], 00:03:49.930 "product_name": "Malloc disk", 00:03:49.930 "block_size": 512, 00:03:49.930 "num_blocks": 16384, 00:03:49.930 "uuid": "27c0406f-a6a5-4133-8561-e74074377d5f", 00:03:49.930 "assigned_rate_limits": { 00:03:49.930 "rw_ios_per_sec": 0, 00:03:49.930 "rw_mbytes_per_sec": 0, 00:03:49.930 "r_mbytes_per_sec": 0, 00:03:49.930 "w_mbytes_per_sec": 0 00:03:49.931 }, 00:03:49.931 "claimed": true, 00:03:49.931 "claim_type": "exclusive_write", 00:03:49.931 "zoned": false, 00:03:49.931 "supported_io_types": { 00:03:49.931 "read": true, 00:03:49.931 "write": true, 00:03:49.931 "unmap": true, 00:03:49.931 "flush": true, 00:03:49.931 "reset": true, 00:03:49.931 "nvme_admin": false, 00:03:49.931 "nvme_io": false, 00:03:49.931 "nvme_io_md": false, 00:03:49.931 "write_zeroes": true, 00:03:49.931 "zcopy": true, 00:03:49.931 "get_zone_info": false, 00:03:49.931 "zone_management": false, 00:03:49.931 "zone_append": false, 00:03:49.931 "compare": false, 00:03:49.931 "compare_and_write": false, 00:03:49.931 "abort": true, 00:03:49.931 "seek_hole": false, 00:03:49.931 "seek_data": false, 00:03:49.931 "copy": true, 00:03:49.931 "nvme_iov_md": false 00:03:49.931 }, 00:03:49.931 "memory_domains": [ 00:03:49.931 { 00:03:49.931 "dma_device_id": "system", 00:03:49.931 "dma_device_type": 1 00:03:49.931 }, 00:03:49.931 { 00:03:49.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.931 "dma_device_type": 2 00:03:49.931 } 00:03:49.931 ], 00:03:49.931 "driver_specific": {} 00:03:49.931 }, 00:03:49.931 { 00:03:49.931 "name": "Passthru0", 00:03:49.931 "aliases": [ 00:03:49.931 "5ae99292-934d-5300-9faf-ca8d56fd6eea" 00:03:49.931 ], 00:03:49.931 "product_name": "passthru", 00:03:49.931 "block_size": 512, 00:03:49.931 "num_blocks": 16384, 00:03:49.931 "uuid": 
"5ae99292-934d-5300-9faf-ca8d56fd6eea", 00:03:49.931 "assigned_rate_limits": { 00:03:49.931 "rw_ios_per_sec": 0, 00:03:49.931 "rw_mbytes_per_sec": 0, 00:03:49.931 "r_mbytes_per_sec": 0, 00:03:49.931 "w_mbytes_per_sec": 0 00:03:49.931 }, 00:03:49.931 "claimed": false, 00:03:49.931 "zoned": false, 00:03:49.931 "supported_io_types": { 00:03:49.931 "read": true, 00:03:49.931 "write": true, 00:03:49.931 "unmap": true, 00:03:49.931 "flush": true, 00:03:49.931 "reset": true, 00:03:49.931 "nvme_admin": false, 00:03:49.931 "nvme_io": false, 00:03:49.931 "nvme_io_md": false, 00:03:49.931 "write_zeroes": true, 00:03:49.931 "zcopy": true, 00:03:49.931 "get_zone_info": false, 00:03:49.931 "zone_management": false, 00:03:49.931 "zone_append": false, 00:03:49.931 "compare": false, 00:03:49.931 "compare_and_write": false, 00:03:49.931 "abort": true, 00:03:49.931 "seek_hole": false, 00:03:49.931 "seek_data": false, 00:03:49.931 "copy": true, 00:03:49.931 "nvme_iov_md": false 00:03:49.931 }, 00:03:49.931 "memory_domains": [ 00:03:49.931 { 00:03:49.931 "dma_device_id": "system", 00:03:49.931 "dma_device_type": 1 00:03:49.931 }, 00:03:49.931 { 00:03:49.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.931 "dma_device_type": 2 00:03:49.931 } 00:03:49.931 ], 00:03:49.931 "driver_specific": { 00:03:49.931 "passthru": { 00:03:49.931 "name": "Passthru0", 00:03:49.931 "base_bdev_name": "Malloc2" 00:03:49.931 } 00:03:49.931 } 00:03:49.931 } 00:03:49.931 ]' 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.931 00:03:49.931 real 0m0.319s 00:03:49.931 user 0m0.186s 00:03:49.931 sys 0m0.047s 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.931 11:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.931 ************************************ 00:03:49.931 END TEST rpc_daemon_integrity 00:03:49.931 ************************************ 00:03:49.931 11:15:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:49.931 11:15:49 rpc -- rpc/rpc.sh@84 -- # killprocess 2243867 00:03:49.931 11:15:49 rpc -- common/autotest_common.sh@954 -- # '[' -z 2243867 ']' 00:03:49.931 11:15:49 rpc -- common/autotest_common.sh@958 -- # kill -0 2243867 00:03:49.931 11:15:49 rpc -- common/autotest_common.sh@959 -- # uname 00:03:49.931 11:15:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.931 11:15:49 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2243867 00:03:50.192 11:15:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.192 11:15:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.192 11:15:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2243867' 00:03:50.192 killing process with pid 2243867 00:03:50.192 11:15:49 rpc -- common/autotest_common.sh@973 -- # kill 2243867 00:03:50.192 11:15:49 rpc -- common/autotest_common.sh@978 -- # wait 2243867 00:03:51.577 00:03:51.577 real 0m4.158s 00:03:51.577 user 0m4.789s 00:03:51.577 sys 0m0.877s 00:03:51.577 11:15:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.577 11:15:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.577 ************************************ 00:03:51.577 END TEST rpc 00:03:51.577 ************************************ 00:03:51.838 11:15:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.838 11:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.838 11:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.838 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:03:51.838 ************************************ 00:03:51.838 START TEST skip_rpc 00:03:51.838 ************************************ 00:03:51.838 11:15:50 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.838 * Looking for test storage... 
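[editor's note] The `killprocess 2243867` teardown above first probes the target with `kill -0` before sending a real signal, and only proceeds when the pid still exists. A self-contained sketch of that liveness probe (using the current shell's own pid as a stand-in, since the spdk_tgt pid from the log is long gone):

```shell
# kill -0 delivers no signal; it only reports whether the pid exists and is
# signalable, which is how killprocess decides whether teardown is needed.
pid=$$   # stand-in for the spdk_tgt pid checked in the log
if kill -0 "$pid" 2>/dev/null; then
  echo "process alive"
fi
```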
00:03:51.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.838 11:15:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.838 --rc genhtml_branch_coverage=1 00:03:51.838 --rc genhtml_function_coverage=1 00:03:51.838 --rc genhtml_legend=1 00:03:51.838 --rc geninfo_all_blocks=1 00:03:51.838 --rc geninfo_unexecuted_blocks=1 00:03:51.838 00:03:51.838 ' 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.838 --rc genhtml_branch_coverage=1 00:03:51.838 --rc genhtml_function_coverage=1 00:03:51.838 --rc genhtml_legend=1 00:03:51.838 --rc geninfo_all_blocks=1 00:03:51.838 --rc geninfo_unexecuted_blocks=1 00:03:51.838 00:03:51.838 ' 00:03:51.838 11:15:51 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:51.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.838 --rc genhtml_branch_coverage=1 00:03:51.838 --rc genhtml_function_coverage=1 00:03:51.838 --rc genhtml_legend=1 00:03:51.838 --rc geninfo_all_blocks=1 00:03:51.838 --rc geninfo_unexecuted_blocks=1 00:03:51.838 00:03:51.838 ' 00:03:52.099 11:15:51 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.099 --rc genhtml_branch_coverage=1 00:03:52.099 --rc genhtml_function_coverage=1 00:03:52.099 --rc genhtml_legend=1 00:03:52.099 --rc geninfo_all_blocks=1 00:03:52.099 --rc geninfo_unexecuted_blocks=1 00:03:52.099 00:03:52.099 ' 00:03:52.099 11:15:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.099 11:15:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.099 11:15:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.099 11:15:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.099 11:15:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.099 11:15:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.099 ************************************ 00:03:52.099 START TEST skip_rpc 00:03:52.099 ************************************ 00:03:52.099 11:15:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:52.099 11:15:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2245056 00:03:52.099 11:15:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.099 11:15:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.099 11:15:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
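[editor's note] The `cmp_versions 1.15 '<' 2` trace above compares the detected lcov version field by field (`ver1` vs `ver2` split on `.-:`) to decide which `--rc` options to export. A compact sketch of the same ordering test, substituting `sort -V` for the explicit loop in scripts/common.sh:

```shell
# Version-order comparison equivalent to the cmp_versions loop in the log:
# true when $1 sorts strictly before $2 under GNU version sort.
ver_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2 1.15 || echo "2 is not < 1.15"
```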
00:03:52.099 [2024-12-07 11:15:51.344518] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:03:52.099 [2024-12-07 11:15:51.344640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245056 ] 00:03:52.411 [2024-12-07 11:15:51.484602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.411 [2024-12-07 11:15:51.582567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:57.810 11:15:56 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2245056 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2245056 ']' 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2245056 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245056 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2245056' 00:03:57.810 killing process with pid 2245056 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2245056 00:03:57.810 11:15:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2245056 00:03:58.746 00:03:58.746 real 0m6.683s 00:03:58.746 user 0m6.338s 00:03:58.746 sys 0m0.388s 00:03:58.746 11:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.746 11:15:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.746 ************************************ 00:03:58.746 END TEST skip_rpc 00:03:58.746 ************************************ 00:03:58.746 11:15:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.746 11:15:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.746 11:15:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.746 11:15:57 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.746 ************************************ 00:03:58.746 START TEST skip_rpc_with_json 00:03:58.746 ************************************ 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2246431 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2246431 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2246431 ']' 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.746 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.747 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.747 11:15:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.747 [2024-12-07 11:15:58.089238] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:03:58.747 [2024-12-07 11:15:58.089350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246431 ] 00:03:59.006 [2024-12-07 11:15:58.219525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.006 [2024-12-07 11:15:58.316081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.944 [2024-12-07 11:15:58.961498] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:59.944 request: 00:03:59.944 { 00:03:59.944 "trtype": "tcp", 00:03:59.944 "method": "nvmf_get_transports", 00:03:59.944 "req_id": 1 00:03:59.944 } 00:03:59.944 Got JSON-RPC error response 00:03:59.944 response: 00:03:59.944 { 00:03:59.944 "code": -19, 00:03:59.944 "message": "No such device" 00:03:59.944 } 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.944 [2024-12-07 11:15:58.973628] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.944 11:15:58 
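[editor's note] The `nvmf_get_transports --trtype tcp` call above is made before any transport exists, so the target logs the JSON-RPC error body with `"code": -19` (-ENODEV, "No such device"); only after `nvmf_create_transport -t tcp` does TCP transport init succeed. A sketch of checking that error body, using a hypothetical copy of the response from the log:

```shell
# Hypothetical copy of the error body logged above; -19 corresponds to -ENODEV.
resp='{"code": -19, "message": "No such device"}'
code=$(echo "$resp" | jq .code)
[ "$code" -eq -19 ] && echo "transport not created yet"
```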
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.944 11:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:59.945 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.945 11:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.945 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.945 11:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.945 { 00:03:59.945 "subsystems": [ 00:03:59.945 { 00:03:59.945 "subsystem": "fsdev", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "fsdev_set_opts", 00:03:59.945 "params": { 00:03:59.945 "fsdev_io_pool_size": 65535, 00:03:59.945 "fsdev_io_cache_size": 256 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "keyring", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "iobuf", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "iobuf_set_options", 00:03:59.945 "params": { 00:03:59.945 "small_pool_count": 8192, 00:03:59.945 "large_pool_count": 1024, 00:03:59.945 "small_bufsize": 8192, 00:03:59.945 "large_bufsize": 135168, 00:03:59.945 "enable_numa": false 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "sock", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "sock_set_default_impl", 00:03:59.945 "params": { 00:03:59.945 "impl_name": "posix" 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "sock_impl_set_options", 00:03:59.945 "params": { 00:03:59.945 "impl_name": "ssl", 00:03:59.945 "recv_buf_size": 4096, 00:03:59.945 "send_buf_size": 4096, 00:03:59.945 "enable_recv_pipe": true, 00:03:59.945 "enable_quickack": false, 00:03:59.945 
"enable_placement_id": 0, 00:03:59.945 "enable_zerocopy_send_server": true, 00:03:59.945 "enable_zerocopy_send_client": false, 00:03:59.945 "zerocopy_threshold": 0, 00:03:59.945 "tls_version": 0, 00:03:59.945 "enable_ktls": false 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "sock_impl_set_options", 00:03:59.945 "params": { 00:03:59.945 "impl_name": "posix", 00:03:59.945 "recv_buf_size": 2097152, 00:03:59.945 "send_buf_size": 2097152, 00:03:59.945 "enable_recv_pipe": true, 00:03:59.945 "enable_quickack": false, 00:03:59.945 "enable_placement_id": 0, 00:03:59.945 "enable_zerocopy_send_server": true, 00:03:59.945 "enable_zerocopy_send_client": false, 00:03:59.945 "zerocopy_threshold": 0, 00:03:59.945 "tls_version": 0, 00:03:59.945 "enable_ktls": false 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "vmd", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "accel", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "accel_set_options", 00:03:59.945 "params": { 00:03:59.945 "small_cache_size": 128, 00:03:59.945 "large_cache_size": 16, 00:03:59.945 "task_count": 2048, 00:03:59.945 "sequence_count": 2048, 00:03:59.945 "buf_count": 2048 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "bdev", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "bdev_set_options", 00:03:59.945 "params": { 00:03:59.945 "bdev_io_pool_size": 65535, 00:03:59.945 "bdev_io_cache_size": 256, 00:03:59.945 "bdev_auto_examine": true, 00:03:59.945 "iobuf_small_cache_size": 128, 00:03:59.945 "iobuf_large_cache_size": 16 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "bdev_raid_set_options", 00:03:59.945 "params": { 00:03:59.945 "process_window_size_kb": 1024, 00:03:59.945 "process_max_bandwidth_mb_sec": 0 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "bdev_iscsi_set_options", 
00:03:59.945 "params": { 00:03:59.945 "timeout_sec": 30 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "bdev_nvme_set_options", 00:03:59.945 "params": { 00:03:59.945 "action_on_timeout": "none", 00:03:59.945 "timeout_us": 0, 00:03:59.945 "timeout_admin_us": 0, 00:03:59.945 "keep_alive_timeout_ms": 10000, 00:03:59.945 "arbitration_burst": 0, 00:03:59.945 "low_priority_weight": 0, 00:03:59.945 "medium_priority_weight": 0, 00:03:59.945 "high_priority_weight": 0, 00:03:59.945 "nvme_adminq_poll_period_us": 10000, 00:03:59.945 "nvme_ioq_poll_period_us": 0, 00:03:59.945 "io_queue_requests": 0, 00:03:59.945 "delay_cmd_submit": true, 00:03:59.945 "transport_retry_count": 4, 00:03:59.945 "bdev_retry_count": 3, 00:03:59.945 "transport_ack_timeout": 0, 00:03:59.945 "ctrlr_loss_timeout_sec": 0, 00:03:59.945 "reconnect_delay_sec": 0, 00:03:59.945 "fast_io_fail_timeout_sec": 0, 00:03:59.945 "disable_auto_failback": false, 00:03:59.945 "generate_uuids": false, 00:03:59.945 "transport_tos": 0, 00:03:59.945 "nvme_error_stat": false, 00:03:59.945 "rdma_srq_size": 0, 00:03:59.945 "io_path_stat": false, 00:03:59.945 "allow_accel_sequence": false, 00:03:59.945 "rdma_max_cq_size": 0, 00:03:59.945 "rdma_cm_event_timeout_ms": 0, 00:03:59.945 "dhchap_digests": [ 00:03:59.945 "sha256", 00:03:59.945 "sha384", 00:03:59.945 "sha512" 00:03:59.945 ], 00:03:59.945 "dhchap_dhgroups": [ 00:03:59.945 "null", 00:03:59.945 "ffdhe2048", 00:03:59.945 "ffdhe3072", 00:03:59.945 "ffdhe4096", 00:03:59.945 "ffdhe6144", 00:03:59.945 "ffdhe8192" 00:03:59.945 ] 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "bdev_nvme_set_hotplug", 00:03:59.945 "params": { 00:03:59.945 "period_us": 100000, 00:03:59.945 "enable": false 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "bdev_wait_for_examine" 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "scsi", 00:03:59.945 "config": null 00:03:59.945 }, 00:03:59.945 { 
00:03:59.945 "subsystem": "scheduler", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "framework_set_scheduler", 00:03:59.945 "params": { 00:03:59.945 "name": "static" 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "vhost_scsi", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "vhost_blk", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "ublk", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "nbd", 00:03:59.945 "config": [] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "nvmf", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "nvmf_set_config", 00:03:59.945 "params": { 00:03:59.945 "discovery_filter": "match_any", 00:03:59.945 "admin_cmd_passthru": { 00:03:59.945 "identify_ctrlr": false 00:03:59.945 }, 00:03:59.945 "dhchap_digests": [ 00:03:59.945 "sha256", 00:03:59.945 "sha384", 00:03:59.945 "sha512" 00:03:59.945 ], 00:03:59.945 "dhchap_dhgroups": [ 00:03:59.945 "null", 00:03:59.945 "ffdhe2048", 00:03:59.945 "ffdhe3072", 00:03:59.945 "ffdhe4096", 00:03:59.945 "ffdhe6144", 00:03:59.945 "ffdhe8192" 00:03:59.945 ] 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "nvmf_set_max_subsystems", 00:03:59.945 "params": { 00:03:59.945 "max_subsystems": 1024 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "nvmf_set_crdt", 00:03:59.945 "params": { 00:03:59.945 "crdt1": 0, 00:03:59.945 "crdt2": 0, 00:03:59.945 "crdt3": 0 00:03:59.945 } 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "method": "nvmf_create_transport", 00:03:59.945 "params": { 00:03:59.945 "trtype": "TCP", 00:03:59.945 "max_queue_depth": 128, 00:03:59.945 "max_io_qpairs_per_ctrlr": 127, 00:03:59.945 "in_capsule_data_size": 4096, 00:03:59.945 "max_io_size": 131072, 00:03:59.945 "io_unit_size": 131072, 00:03:59.945 "max_aq_depth": 128, 00:03:59.945 "num_shared_buffers": 511, 
00:03:59.945 "buf_cache_size": 4294967295, 00:03:59.945 "dif_insert_or_strip": false, 00:03:59.945 "zcopy": false, 00:03:59.945 "c2h_success": true, 00:03:59.945 "sock_priority": 0, 00:03:59.945 "abort_timeout_sec": 1, 00:03:59.945 "ack_timeout": 0, 00:03:59.945 "data_wr_pool_size": 0 00:03:59.945 } 00:03:59.945 } 00:03:59.945 ] 00:03:59.945 }, 00:03:59.945 { 00:03:59.945 "subsystem": "iscsi", 00:03:59.945 "config": [ 00:03:59.945 { 00:03:59.945 "method": "iscsi_set_options", 00:03:59.945 "params": { 00:03:59.945 "node_base": "iqn.2016-06.io.spdk", 00:03:59.945 "max_sessions": 128, 00:03:59.945 "max_connections_per_session": 2, 00:03:59.945 "max_queue_depth": 64, 00:03:59.945 "default_time2wait": 2, 00:03:59.945 "default_time2retain": 20, 00:03:59.946 "first_burst_length": 8192, 00:03:59.946 "immediate_data": true, 00:03:59.946 "allow_duplicated_isid": false, 00:03:59.946 "error_recovery_level": 0, 00:03:59.946 "nop_timeout": 60, 00:03:59.946 "nop_in_interval": 30, 00:03:59.946 "disable_chap": false, 00:03:59.946 "require_chap": false, 00:03:59.946 "mutual_chap": false, 00:03:59.946 "chap_group": 0, 00:03:59.946 "max_large_datain_per_connection": 64, 00:03:59.946 "max_r2t_per_connection": 4, 00:03:59.946 "pdu_pool_size": 36864, 00:03:59.946 "immediate_data_pool_size": 16384, 00:03:59.946 "data_out_pool_size": 2048 00:03:59.946 } 00:03:59.946 } 00:03:59.946 ] 00:03:59.946 } 00:03:59.946 ] 00:03:59.946 } 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2246431 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2246431 ']' 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2246431 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246431 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246431' 00:03:59.946 killing process with pid 2246431 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2246431 00:03:59.946 11:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2246431 00:04:01.859 11:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2246844 00:04:01.859 11:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:01.859 11:16:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2246844 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2246844 ']' 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2246844 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246844 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.136 11:16:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.136 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246844' 00:04:07.136 killing process with pid 2246844 00:04:07.137 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2246844 00:04:07.137 11:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2246844 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.523 00:04:08.523 real 0m9.509s 00:04:08.523 user 0m9.124s 00:04:08.523 sys 0m0.847s 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.523 ************************************ 00:04:08.523 END TEST skip_rpc_with_json 00:04:08.523 ************************************ 00:04:08.523 11:16:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.523 ************************************ 00:04:08.523 START TEST skip_rpc_with_delay 00:04:08.523 ************************************ 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.523 [2024-12-07 11:16:07.674539] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:08.523 00:04:08.523 real 0m0.163s 00:04:08.523 user 0m0.088s 00:04:08.523 sys 0m0.074s 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.523 11:16:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:08.523 ************************************ 00:04:08.523 END TEST skip_rpc_with_delay 00:04:08.523 ************************************ 00:04:08.523 11:16:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:08.523 11:16:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:08.523 11:16:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.523 11:16:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.523 ************************************ 00:04:08.523 START TEST exit_on_failed_rpc_init 00:04:08.523 ************************************ 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2248351 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2248351 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2248351 ']' 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.523 11:16:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.798 [2024-12-07 11:16:07.926217] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:08.798 [2024-12-07 11:16:07.926358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248351 ] 00:04:08.799 [2024-12-07 11:16:08.070155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.058 [2024-12-07 11:16:08.169514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.631 
11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.631 11:16:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.631 [2024-12-07 11:16:08.905860] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:09.631 [2024-12-07 11:16:08.905972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248533 ] 00:04:09.891 [2024-12-07 11:16:09.048499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.891 [2024-12-07 11:16:09.146257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.891 [2024-12-07 11:16:09.146330] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:09.891 [2024-12-07 11:16:09.146347] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:09.891 [2024-12-07 11:16:09.146358] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2248351 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2248351 ']' 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2248351 00:04:10.151 11:16:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2248351 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2248351' 00:04:10.151 killing process with pid 2248351 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2248351 00:04:10.151 11:16:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2248351 00:04:12.080 00:04:12.080 real 0m3.181s 00:04:12.080 user 0m3.496s 00:04:12.080 sys 0m0.606s 00:04:12.080 11:16:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.080 11:16:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.080 ************************************ 00:04:12.080 END TEST exit_on_failed_rpc_init 00:04:12.080 ************************************ 00:04:12.080 11:16:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.080 00:04:12.080 real 0m20.060s 00:04:12.080 user 0m19.278s 00:04:12.080 sys 0m2.235s 00:04:12.080 11:16:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.080 11:16:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.080 ************************************ 00:04:12.080 END TEST skip_rpc 00:04:12.080 ************************************ 00:04:12.080 11:16:11 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.080 11:16:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.080 11:16:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.080 11:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:12.080 ************************************ 00:04:12.080 START TEST rpc_client 00:04:12.080 ************************************ 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.080 * Looking for test storage... 00:04:12.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.080 11:16:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.080 --rc genhtml_branch_coverage=1 00:04:12.080 --rc genhtml_function_coverage=1 00:04:12.080 --rc genhtml_legend=1 00:04:12.080 --rc geninfo_all_blocks=1 00:04:12.080 --rc geninfo_unexecuted_blocks=1 00:04:12.080 00:04:12.080 ' 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.080 --rc genhtml_branch_coverage=1 
00:04:12.080 --rc genhtml_function_coverage=1 00:04:12.080 --rc genhtml_legend=1 00:04:12.080 --rc geninfo_all_blocks=1 00:04:12.080 --rc geninfo_unexecuted_blocks=1 00:04:12.080 00:04:12.080 ' 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.080 --rc genhtml_branch_coverage=1 00:04:12.080 --rc genhtml_function_coverage=1 00:04:12.080 --rc genhtml_legend=1 00:04:12.080 --rc geninfo_all_blocks=1 00:04:12.080 --rc geninfo_unexecuted_blocks=1 00:04:12.080 00:04:12.080 ' 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.080 --rc genhtml_branch_coverage=1 00:04:12.080 --rc genhtml_function_coverage=1 00:04:12.080 --rc genhtml_legend=1 00:04:12.080 --rc geninfo_all_blocks=1 00:04:12.080 --rc geninfo_unexecuted_blocks=1 00:04:12.080 00:04:12.080 ' 00:04:12.080 11:16:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:12.080 OK 00:04:12.080 11:16:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.080 00:04:12.080 real 0m0.275s 00:04:12.080 user 0m0.153s 00:04:12.080 sys 0m0.135s 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.080 11:16:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.080 ************************************ 00:04:12.080 END TEST rpc_client 00:04:12.080 ************************************ 00:04:12.341 11:16:11 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.341 11:16:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.341 11:16:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.341 11:16:11 -- common/autotest_common.sh@10 
-- # set +x 00:04:12.341 ************************************ 00:04:12.341 START TEST json_config 00:04:12.341 ************************************ 00:04:12.341 11:16:11 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.341 11:16:11 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.341 11:16:11 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.341 11:16:11 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.341 11:16:11 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.341 11:16:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.341 11:16:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.341 11:16:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.341 11:16:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.341 11:16:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.342 11:16:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.342 11:16:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.342 11:16:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:12.342 11:16:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.342 11:16:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.342 11:16:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.342 11:16:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.342 11:16:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.342 11:16:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.342 11:16:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.342 11:16:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.342 11:16:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:12.342 11:16:11 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.342 11:16:11 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.342 --rc genhtml_branch_coverage=1 00:04:12.342 --rc genhtml_function_coverage=1 00:04:12.342 --rc genhtml_legend=1 00:04:12.342 --rc geninfo_all_blocks=1 00:04:12.342 --rc geninfo_unexecuted_blocks=1 00:04:12.342 00:04:12.342 ' 00:04:12.342 11:16:11 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.342 --rc genhtml_branch_coverage=1 00:04:12.342 --rc genhtml_function_coverage=1 00:04:12.342 --rc genhtml_legend=1 00:04:12.342 --rc geninfo_all_blocks=1 00:04:12.342 --rc geninfo_unexecuted_blocks=1 00:04:12.342 00:04:12.342 ' 00:04:12.342 11:16:11 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.342 --rc genhtml_branch_coverage=1 00:04:12.342 --rc genhtml_function_coverage=1 00:04:12.342 --rc genhtml_legend=1 00:04:12.342 --rc geninfo_all_blocks=1 00:04:12.342 --rc geninfo_unexecuted_blocks=1 00:04:12.342 00:04:12.342 ' 00:04:12.342 11:16:11 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.342 --rc genhtml_branch_coverage=1 00:04:12.342 --rc genhtml_function_coverage=1 00:04:12.342 --rc genhtml_legend=1 00:04:12.342 --rc geninfo_all_blocks=1 00:04:12.342 --rc geninfo_unexecuted_blocks=1 00:04:12.342 00:04:12.342 ' 00:04:12.342 11:16:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:12.342 11:16:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.342 11:16:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.342 11:16:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.342 11:16:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.342 11:16:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.342 11:16:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.342 11:16:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.342 11:16:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:12.342 11:16:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@51 -- # : 0 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.342 11:16:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:12.603 INFO: JSON configuration test init 00:04:12.603 11:16:11 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.603 11:16:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:12.603 11:16:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.603 11:16:11 json_config -- json_config/common.sh@10 -- # shift 00:04:12.603 11:16:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.603 11:16:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.603 11:16:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.603 11:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.603 11:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.603 11:16:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2249323 00:04:12.603 11:16:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.603 Waiting for target to run... 
00:04:12.603 11:16:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2249323 /var/tmp/spdk_tgt.sock 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 2249323 ']' 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.603 11:16:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.603 11:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.603 [2024-12-07 11:16:11.804218] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:12.603 [2024-12-07 11:16:11.804331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249323 ] 00:04:12.863 [2024-12-07 11:16:12.148269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.122 [2024-12-07 11:16:12.246774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:13.382 11:16:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:13.382 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.382 11:16:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:13.382 11:16:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:13.382 11:16:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:14.323 11:16:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.323 11:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:14.323 11:16:13 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:14.324 11:16:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@54 -- # sort 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:14.583 11:16:13 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:14.583 11:16:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.583 11:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:14.583 11:16:13 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:14.584 11:16:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.584 11:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:14.584 11:16:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.584 11:16:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.843 MallocForNvmf0 00:04:14.843 11:16:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:14.843 11:16:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.103 MallocForNvmf1 00:04:15.103 11:16:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.103 11:16:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.103 [2024-12-07 11:16:14.427855] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.363 11:16:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.363 11:16:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.363 11:16:14 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.363 11:16:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.622 11:16:14 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.622 11:16:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.622 11:16:14 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.622 11:16:14 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.886 [2024-12-07 11:16:15.122292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.886 11:16:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:15.886 11:16:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.886 11:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.886 11:16:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:15.886 11:16:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.886 11:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.886 11:16:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:15.886 11:16:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:15.886 11:16:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.147 MallocBdevForConfigChangeCheck 00:04:16.147 11:16:15 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:16.147 11:16:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.147 11:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.147 11:16:15 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:16.147 11:16:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.405 11:16:15 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:16.405 INFO: shutting down applications... 00:04:16.405 11:16:15 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:16.405 11:16:15 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:16.405 11:16:15 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:16.405 11:16:15 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:16.974 Calling clear_iscsi_subsystem 00:04:16.974 Calling clear_nvmf_subsystem 00:04:16.974 Calling clear_nbd_subsystem 00:04:16.974 Calling clear_ublk_subsystem 00:04:16.974 Calling clear_vhost_blk_subsystem 00:04:16.974 Calling clear_vhost_scsi_subsystem 00:04:16.974 Calling clear_bdev_subsystem 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:16.974 11:16:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:17.233 11:16:16 json_config -- json_config/json_config.sh@352 -- # break 00:04:17.233 11:16:16 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:17.233 11:16:16 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:17.233 11:16:16 json_config -- json_config/common.sh@31 -- # local app=target 00:04:17.233 11:16:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:17.233 11:16:16 json_config -- json_config/common.sh@35 -- # [[ -n 2249323 ]] 00:04:17.233 11:16:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2249323 00:04:17.233 11:16:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:17.233 11:16:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:17.233 11:16:16 json_config -- json_config/common.sh@41 -- # kill -0 2249323 00:04:17.233 11:16:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:17.800 11:16:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:17.800 11:16:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:17.800 11:16:17 json_config -- json_config/common.sh@41 -- # kill -0 2249323 00:04:17.800 11:16:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:18.370 11:16:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:18.370 11:16:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:18.370 11:16:17 json_config -- json_config/common.sh@41 -- # kill -0 2249323 00:04:18.370 11:16:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:18.370 11:16:17 json_config -- json_config/common.sh@43 -- # break 00:04:18.370 11:16:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:18.370 11:16:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:18.370 SPDK target shutdown done 00:04:18.370 11:16:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:18.370 INFO: relaunching applications... 
00:04:18.370 11:16:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.370 11:16:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:18.370 11:16:17 json_config -- json_config/common.sh@10 -- # shift 00:04:18.370 11:16:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.370 11:16:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.370 11:16:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.370 11:16:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.370 11:16:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.370 11:16:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2250476 00:04:18.370 11:16:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.370 Waiting for target to run... 00:04:18.370 11:16:17 json_config -- json_config/common.sh@25 -- # waitforlisten 2250476 /var/tmp/spdk_tgt.sock 00:04:18.370 11:16:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 2250476 ']' 00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.370 11:16:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.370 [2024-12-07 11:16:17.672360] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:18.370 [2024-12-07 11:16:17.672476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250476 ] 00:04:18.943 [2024-12-07 11:16:18.058648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.943 [2024-12-07 11:16:18.158734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.881 [2024-12-07 11:16:19.168921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.881 [2024-12-07 11:16:19.201338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.139 11:16:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.139 11:16:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:20.139 11:16:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.139 00:04:20.139 11:16:19 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:20.139 11:16:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:20.139 INFO: Checking if target configuration is the same... 
00:04:20.139 11:16:19 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.139 11:16:19 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:20.139 11:16:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.139 + '[' 2 -ne 2 ']' 00:04:20.139 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.139 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:20.139 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.139 +++ basename /dev/fd/62 00:04:20.139 ++ mktemp /tmp/62.XXX 00:04:20.139 + tmp_file_1=/tmp/62.kZy 00:04:20.139 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.139 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.139 + tmp_file_2=/tmp/spdk_tgt_config.json.F6m 00:04:20.139 + ret=0 00:04:20.139 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.399 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.399 + diff -u /tmp/62.kZy /tmp/spdk_tgt_config.json.F6m 00:04:20.399 + echo 'INFO: JSON config files are the same' 00:04:20.399 INFO: JSON config files are the same 00:04:20.399 + rm /tmp/62.kZy /tmp/spdk_tgt_config.json.F6m 00:04:20.399 + exit 0 00:04:20.399 11:16:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:20.399 11:16:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.399 INFO: changing configuration and checking if this can be detected... 
00:04:20.399 11:16:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.399 11:16:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.659 11:16:19 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.660 11:16:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:20.660 11:16:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.660 + '[' 2 -ne 2 ']' 00:04:20.660 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.660 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:20.660 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.660 +++ basename /dev/fd/62 00:04:20.660 ++ mktemp /tmp/62.XXX 00:04:20.660 + tmp_file_1=/tmp/62.wTh 00:04:20.660 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.660 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.660 + tmp_file_2=/tmp/spdk_tgt_config.json.Es3 00:04:20.660 + ret=0 00:04:20.660 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.920 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.920 + diff -u /tmp/62.wTh /tmp/spdk_tgt_config.json.Es3 00:04:20.920 + ret=1 00:04:20.920 + echo '=== Start of file: /tmp/62.wTh ===' 00:04:20.920 + cat /tmp/62.wTh 00:04:20.920 + echo '=== End of file: /tmp/62.wTh ===' 00:04:20.920 + echo '' 00:04:20.920 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Es3 ===' 00:04:20.920 + cat /tmp/spdk_tgt_config.json.Es3 00:04:20.920 + echo '=== End of file: /tmp/spdk_tgt_config.json.Es3 ===' 00:04:20.920 + echo '' 00:04:20.920 + rm /tmp/62.wTh /tmp/spdk_tgt_config.json.Es3 00:04:20.920 + exit 1 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:20.920 INFO: configuration change detected. 
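The `killprocess` sequence that follows sends SIGINT, then repeatedly probes the pid with `kill -0` (signal 0 checks existence without delivering anything) and sleeps between probes. A hedged sketch of that pattern, with an illustrative helper name and a hard-kill fallback the caller may or may not want:

```shell
# Sketch: signal a process, then poll `kill -0` until it exits or a
# bounded number of half-second retries is exhausted.
wait_kill() {
    local pid=$1 tries=${2:-30} i
    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
    for (( i = 0; i < tries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0    # process exited
        sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null || true            # last resort
}
```

Note that `kill -0` still succeeds for a zombie, so a parent should `wait` on the pid afterwards before concluding it is fully gone.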
00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@324 -- # [[ -n 2250476 ]] 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.920 11:16:20 json_config -- json_config/json_config.sh@330 -- # killprocess 2250476 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@954 -- # '[' -z 2250476 ']' 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@958 -- # kill -0 
2250476 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@959 -- # uname 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.920 11:16:20 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2250476 00:04:21.180 11:16:20 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.180 11:16:20 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.180 11:16:20 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2250476' 00:04:21.180 killing process with pid 2250476 00:04:21.180 11:16:20 json_config -- common/autotest_common.sh@973 -- # kill 2250476 00:04:21.180 11:16:20 json_config -- common/autotest_common.sh@978 -- # wait 2250476 00:04:21.751 11:16:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.011 11:16:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:22.011 11:16:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.011 11:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.011 11:16:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:22.012 11:16:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:22.012 INFO: Success 00:04:22.012 00:04:22.012 real 0m9.661s 00:04:22.012 user 0m10.918s 00:04:22.012 sys 0m2.327s 00:04:22.012 11:16:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.012 11:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.012 ************************************ 00:04:22.012 END TEST json_config 00:04:22.012 ************************************ 00:04:22.012 11:16:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.012 11:16:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.012 11:16:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.012 11:16:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.012 ************************************ 00:04:22.012 START TEST json_config_extra_key 00:04:22.012 ************************************ 00:04:22.012 11:16:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.012 11:16:21 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.012 11:16:21 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.012 11:16:21 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.273 11:16:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.273 --rc genhtml_branch_coverage=1 00:04:22.273 --rc genhtml_function_coverage=1 00:04:22.273 --rc genhtml_legend=1 00:04:22.273 --rc geninfo_all_blocks=1 
00:04:22.273 --rc geninfo_unexecuted_blocks=1 00:04:22.273 00:04:22.273 ' 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.273 --rc genhtml_branch_coverage=1 00:04:22.273 --rc genhtml_function_coverage=1 00:04:22.273 --rc genhtml_legend=1 00:04:22.273 --rc geninfo_all_blocks=1 00:04:22.273 --rc geninfo_unexecuted_blocks=1 00:04:22.273 00:04:22.273 ' 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.273 --rc genhtml_branch_coverage=1 00:04:22.273 --rc genhtml_function_coverage=1 00:04:22.273 --rc genhtml_legend=1 00:04:22.273 --rc geninfo_all_blocks=1 00:04:22.273 --rc geninfo_unexecuted_blocks=1 00:04:22.273 00:04:22.273 ' 00:04:22.273 11:16:21 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.273 --rc genhtml_branch_coverage=1 00:04:22.273 --rc genhtml_function_coverage=1 00:04:22.273 --rc genhtml_legend=1 00:04:22.273 --rc geninfo_all_blocks=1 00:04:22.273 --rc geninfo_unexecuted_blocks=1 00:04:22.273 00:04:22.273 ' 00:04:22.273 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.273 11:16:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.273 11:16:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.273 11:16:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.273 11:16:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.273 11:16:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
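The `lt 1.15 2` / `cmp_versions` steps traced above split both version strings on dot-like separators and compare field by field, padding the shorter one with zeros. A compact sketch of the same numeric comparison (function name illustrative, simplified to `.` separators only):

```shell
# Sketch: return 0 iff $1 is a strictly lower version than $2,
# comparing dot-separated fields numerically, missing fields as 0.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

Numeric (not lexical) comparison matters here: `1.9 < 1.15` holds for versions, which a plain string sort would get wrong.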
00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.274 11:16:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.274 11:16:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.274 11:16:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.274 11:16:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.274 11:16:21 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.274 11:16:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.274 11:16:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.274 11:16:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.274 11:16:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:22.274 11:16:21 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.274 11:16:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.274 INFO: launching applications... 00:04:22.274 11:16:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2251428 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.274 Waiting for target to run... 
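The `waitforlisten` call that follows starts `spdk_tgt` and then waits for it to come up on `/var/tmp/spdk_tgt.sock`. A hedged sketch of the waiting half only: poll until a path exists as a UNIX-domain socket or retries run out (helper name and defaults are illustrative, and a real check would also confirm the server accepts connections, not just that the socket file exists):

```shell
# Sketch: poll for a UNIX-domain socket path, bounded retries.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $sock ]] && return 0   # -S: exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```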
00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2251428 /var/tmp/spdk_tgt.sock 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2251428 ']' 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.274 11:16:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.274 11:16:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.274 [2024-12-07 11:16:21.540393] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:22.274 [2024-12-07 11:16:21.540522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251428 ] 00:04:22.845 [2024-12-07 11:16:21.985227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.845 [2024-12-07 11:16:22.083863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.416 11:16:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.416 11:16:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.416 00:04:23.416 11:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:23.416 INFO: shutting down applications... 00:04:23.416 11:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2251428 ]] 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2251428 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2251428 00:04:23.416 11:16:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.986 11:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.986 11:16:23 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.986 11:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2251428 00:04:23.986 11:16:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.558 11:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.558 11:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.558 11:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2251428 00:04:24.558 11:16:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.818 11:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.818 11:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.818 11:16:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2251428 00:04:24.818 11:16:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2251428 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.388 11:16:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.388 SPDK target shutdown done 00:04:25.388 11:16:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:25.388 Success 00:04:25.388 00:04:25.388 real 0m3.438s 00:04:25.388 user 0m2.943s 00:04:25.388 sys 0m0.678s 00:04:25.388 11:16:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.389 11:16:24 json_config_extra_key -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.389 ************************************ 00:04:25.389 END TEST json_config_extra_key 00:04:25.389 ************************************ 00:04:25.389 11:16:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.389 11:16:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.389 11:16:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.389 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.389 ************************************ 00:04:25.389 START TEST alias_rpc 00:04:25.389 ************************************ 00:04:25.389 11:16:24 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.650 * Looking for test storage... 00:04:25.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:25.650 11:16:24 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.650 11:16:24 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.650 11:16:24 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.650 11:16:24 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.650 11:16:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.651 11:16:24 
alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.651 11:16:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.651 --rc genhtml_branch_coverage=1 00:04:25.651 --rc genhtml_function_coverage=1 00:04:25.651 --rc genhtml_legend=1 00:04:25.651 --rc geninfo_all_blocks=1 00:04:25.651 --rc 
geninfo_unexecuted_blocks=1 00:04:25.651 00:04:25.651 ' 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.651 --rc genhtml_branch_coverage=1 00:04:25.651 --rc genhtml_function_coverage=1 00:04:25.651 --rc genhtml_legend=1 00:04:25.651 --rc geninfo_all_blocks=1 00:04:25.651 --rc geninfo_unexecuted_blocks=1 00:04:25.651 00:04:25.651 ' 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.651 --rc genhtml_branch_coverage=1 00:04:25.651 --rc genhtml_function_coverage=1 00:04:25.651 --rc genhtml_legend=1 00:04:25.651 --rc geninfo_all_blocks=1 00:04:25.651 --rc geninfo_unexecuted_blocks=1 00:04:25.651 00:04:25.651 ' 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.651 --rc genhtml_branch_coverage=1 00:04:25.651 --rc genhtml_function_coverage=1 00:04:25.651 --rc genhtml_legend=1 00:04:25.651 --rc geninfo_all_blocks=1 00:04:25.651 --rc geninfo_unexecuted_blocks=1 00:04:25.651 00:04:25.651 ' 00:04:25.651 11:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:25.651 11:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2252199 00:04:25.651 11:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2252199 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2252199 ']' 00:04:25.651 11:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.651 11:16:24 alias_rpc -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.651 11:16:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.919 [2024-12-07 11:16:25.038408] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:25.919 [2024-12-07 11:16:25.038549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252199 ] 00:04:25.919 [2024-12-07 11:16:25.178524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.180 [2024-12-07 11:16:25.276783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.748 11:16:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.748 11:16:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:26.748 11:16:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:27.008 11:16:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2252199 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2252199 ']' 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2252199 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252199 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.008 11:16:26 alias_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252199' 00:04:27.008 killing process with pid 2252199 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 2252199 00:04:27.008 11:16:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 2252199 00:04:28.918 00:04:28.918 real 0m3.062s 00:04:28.918 user 0m3.081s 00:04:28.918 sys 0m0.560s 00:04:28.918 11:16:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.918 11:16:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.918 ************************************ 00:04:28.918 END TEST alias_rpc 00:04:28.918 ************************************ 00:04:28.918 11:16:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:28.918 11:16:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.918 11:16:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.918 11:16:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.918 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.918 ************************************ 00:04:28.918 START TEST spdkcli_tcp 00:04:28.918 ************************************ 00:04:28.918 11:16:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.918 * Looking for test storage... 
00:04:28.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:28.918 11:16:27 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.918 11:16:27 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.918 11:16:27 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.918 11:16:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.918 --rc genhtml_branch_coverage=1 00:04:28.918 --rc genhtml_function_coverage=1 00:04:28.918 --rc genhtml_legend=1 00:04:28.918 --rc geninfo_all_blocks=1 00:04:28.918 --rc geninfo_unexecuted_blocks=1 00:04:28.918 00:04:28.918 ' 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.918 --rc genhtml_branch_coverage=1 00:04:28.918 --rc genhtml_function_coverage=1 00:04:28.918 --rc genhtml_legend=1 00:04:28.918 --rc geninfo_all_blocks=1 00:04:28.918 --rc geninfo_unexecuted_blocks=1 00:04:28.918 00:04:28.918 ' 00:04:28.918 11:16:28 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.918 --rc genhtml_branch_coverage=1 00:04:28.918 --rc genhtml_function_coverage=1 00:04:28.918 --rc genhtml_legend=1 00:04:28.918 --rc geninfo_all_blocks=1 00:04:28.918 --rc geninfo_unexecuted_blocks=1 00:04:28.918 00:04:28.918 ' 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.918 --rc genhtml_branch_coverage=1 00:04:28.918 --rc genhtml_function_coverage=1 00:04:28.918 --rc genhtml_legend=1 00:04:28.918 --rc geninfo_all_blocks=1 00:04:28.918 --rc geninfo_unexecuted_blocks=1 00:04:28.918 00:04:28.918 ' 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2252815 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2252815 00:04:28.918 11:16:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2252815 ']' 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.918 11:16:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.918 [2024-12-07 11:16:28.187446] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:28.918 [2024-12-07 11:16:28.187588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252815 ] 00:04:29.178 [2024-12-07 11:16:28.331430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.178 [2024-12-07 11:16:28.432859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.178 [2024-12-07 11:16:28.432879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.747 11:16:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.747 11:16:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:29.747 11:16:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2253076 00:04:29.747 11:16:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:29.747 11:16:29 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:30.006 [ 00:04:30.006 "bdev_malloc_delete", 00:04:30.006 "bdev_malloc_create", 00:04:30.006 "bdev_null_resize", 00:04:30.006 "bdev_null_delete", 00:04:30.006 "bdev_null_create", 00:04:30.006 "bdev_nvme_cuse_unregister", 00:04:30.006 "bdev_nvme_cuse_register", 00:04:30.006 "bdev_opal_new_user", 00:04:30.006 "bdev_opal_set_lock_state", 00:04:30.006 "bdev_opal_delete", 00:04:30.006 "bdev_opal_get_info", 00:04:30.006 "bdev_opal_create", 00:04:30.006 "bdev_nvme_opal_revert", 00:04:30.006 "bdev_nvme_opal_init", 00:04:30.006 "bdev_nvme_send_cmd", 00:04:30.006 "bdev_nvme_set_keys", 00:04:30.006 "bdev_nvme_get_path_iostat", 00:04:30.006 "bdev_nvme_get_mdns_discovery_info", 00:04:30.006 "bdev_nvme_stop_mdns_discovery", 00:04:30.006 "bdev_nvme_start_mdns_discovery", 00:04:30.006 "bdev_nvme_set_multipath_policy", 00:04:30.006 "bdev_nvme_set_preferred_path", 00:04:30.006 "bdev_nvme_get_io_paths", 00:04:30.006 "bdev_nvme_remove_error_injection", 00:04:30.006 "bdev_nvme_add_error_injection", 00:04:30.006 "bdev_nvme_get_discovery_info", 00:04:30.006 "bdev_nvme_stop_discovery", 00:04:30.007 "bdev_nvme_start_discovery", 00:04:30.007 "bdev_nvme_get_controller_health_info", 00:04:30.007 "bdev_nvme_disable_controller", 00:04:30.007 "bdev_nvme_enable_controller", 00:04:30.007 "bdev_nvme_reset_controller", 00:04:30.007 "bdev_nvme_get_transport_statistics", 00:04:30.007 "bdev_nvme_apply_firmware", 00:04:30.007 "bdev_nvme_detach_controller", 00:04:30.007 "bdev_nvme_get_controllers", 00:04:30.007 "bdev_nvme_attach_controller", 00:04:30.007 "bdev_nvme_set_hotplug", 00:04:30.007 "bdev_nvme_set_options", 00:04:30.007 "bdev_passthru_delete", 00:04:30.007 "bdev_passthru_create", 00:04:30.007 "bdev_lvol_set_parent_bdev", 00:04:30.007 "bdev_lvol_set_parent", 00:04:30.007 "bdev_lvol_check_shallow_copy", 00:04:30.007 "bdev_lvol_start_shallow_copy", 00:04:30.007 "bdev_lvol_grow_lvstore", 00:04:30.007 
"bdev_lvol_get_lvols", 00:04:30.007 "bdev_lvol_get_lvstores", 00:04:30.007 "bdev_lvol_delete", 00:04:30.007 "bdev_lvol_set_read_only", 00:04:30.007 "bdev_lvol_resize", 00:04:30.007 "bdev_lvol_decouple_parent", 00:04:30.007 "bdev_lvol_inflate", 00:04:30.007 "bdev_lvol_rename", 00:04:30.007 "bdev_lvol_clone_bdev", 00:04:30.007 "bdev_lvol_clone", 00:04:30.007 "bdev_lvol_snapshot", 00:04:30.007 "bdev_lvol_create", 00:04:30.007 "bdev_lvol_delete_lvstore", 00:04:30.007 "bdev_lvol_rename_lvstore", 00:04:30.007 "bdev_lvol_create_lvstore", 00:04:30.007 "bdev_raid_set_options", 00:04:30.007 "bdev_raid_remove_base_bdev", 00:04:30.007 "bdev_raid_add_base_bdev", 00:04:30.007 "bdev_raid_delete", 00:04:30.007 "bdev_raid_create", 00:04:30.007 "bdev_raid_get_bdevs", 00:04:30.007 "bdev_error_inject_error", 00:04:30.007 "bdev_error_delete", 00:04:30.007 "bdev_error_create", 00:04:30.007 "bdev_split_delete", 00:04:30.007 "bdev_split_create", 00:04:30.007 "bdev_delay_delete", 00:04:30.007 "bdev_delay_create", 00:04:30.007 "bdev_delay_update_latency", 00:04:30.007 "bdev_zone_block_delete", 00:04:30.007 "bdev_zone_block_create", 00:04:30.007 "blobfs_create", 00:04:30.007 "blobfs_detect", 00:04:30.007 "blobfs_set_cache_size", 00:04:30.007 "bdev_aio_delete", 00:04:30.007 "bdev_aio_rescan", 00:04:30.007 "bdev_aio_create", 00:04:30.007 "bdev_ftl_set_property", 00:04:30.007 "bdev_ftl_get_properties", 00:04:30.007 "bdev_ftl_get_stats", 00:04:30.007 "bdev_ftl_unmap", 00:04:30.007 "bdev_ftl_unload", 00:04:30.007 "bdev_ftl_delete", 00:04:30.007 "bdev_ftl_load", 00:04:30.007 "bdev_ftl_create", 00:04:30.007 "bdev_virtio_attach_controller", 00:04:30.007 "bdev_virtio_scsi_get_devices", 00:04:30.007 "bdev_virtio_detach_controller", 00:04:30.007 "bdev_virtio_blk_set_hotplug", 00:04:30.007 "bdev_iscsi_delete", 00:04:30.007 "bdev_iscsi_create", 00:04:30.007 "bdev_iscsi_set_options", 00:04:30.007 "accel_error_inject_error", 00:04:30.007 "ioat_scan_accel_module", 00:04:30.007 "dsa_scan_accel_module", 
00:04:30.007 "iaa_scan_accel_module", 00:04:30.007 "keyring_file_remove_key", 00:04:30.007 "keyring_file_add_key", 00:04:30.007 "keyring_linux_set_options", 00:04:30.007 "fsdev_aio_delete", 00:04:30.007 "fsdev_aio_create", 00:04:30.007 "iscsi_get_histogram", 00:04:30.007 "iscsi_enable_histogram", 00:04:30.007 "iscsi_set_options", 00:04:30.007 "iscsi_get_auth_groups", 00:04:30.007 "iscsi_auth_group_remove_secret", 00:04:30.007 "iscsi_auth_group_add_secret", 00:04:30.007 "iscsi_delete_auth_group", 00:04:30.007 "iscsi_create_auth_group", 00:04:30.007 "iscsi_set_discovery_auth", 00:04:30.007 "iscsi_get_options", 00:04:30.007 "iscsi_target_node_request_logout", 00:04:30.007 "iscsi_target_node_set_redirect", 00:04:30.007 "iscsi_target_node_set_auth", 00:04:30.007 "iscsi_target_node_add_lun", 00:04:30.007 "iscsi_get_stats", 00:04:30.007 "iscsi_get_connections", 00:04:30.007 "iscsi_portal_group_set_auth", 00:04:30.007 "iscsi_start_portal_group", 00:04:30.007 "iscsi_delete_portal_group", 00:04:30.007 "iscsi_create_portal_group", 00:04:30.007 "iscsi_get_portal_groups", 00:04:30.007 "iscsi_delete_target_node", 00:04:30.007 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.007 "iscsi_target_node_add_pg_ig_maps", 00:04:30.007 "iscsi_create_target_node", 00:04:30.007 "iscsi_get_target_nodes", 00:04:30.007 "iscsi_delete_initiator_group", 00:04:30.007 "iscsi_initiator_group_remove_initiators", 00:04:30.007 "iscsi_initiator_group_add_initiators", 00:04:30.007 "iscsi_create_initiator_group", 00:04:30.007 "iscsi_get_initiator_groups", 00:04:30.007 "nvmf_set_crdt", 00:04:30.007 "nvmf_set_config", 00:04:30.007 "nvmf_set_max_subsystems", 00:04:30.007 "nvmf_stop_mdns_prr", 00:04:30.007 "nvmf_publish_mdns_prr", 00:04:30.007 "nvmf_subsystem_get_listeners", 00:04:30.007 "nvmf_subsystem_get_qpairs", 00:04:30.007 "nvmf_subsystem_get_controllers", 00:04:30.007 "nvmf_get_stats", 00:04:30.007 "nvmf_get_transports", 00:04:30.007 "nvmf_create_transport", 00:04:30.007 "nvmf_get_targets", 00:04:30.007 
"nvmf_delete_target", 00:04:30.007 "nvmf_create_target", 00:04:30.007 "nvmf_subsystem_allow_any_host", 00:04:30.007 "nvmf_subsystem_set_keys", 00:04:30.007 "nvmf_subsystem_remove_host", 00:04:30.007 "nvmf_subsystem_add_host", 00:04:30.007 "nvmf_ns_remove_host", 00:04:30.007 "nvmf_ns_add_host", 00:04:30.007 "nvmf_subsystem_remove_ns", 00:04:30.007 "nvmf_subsystem_set_ns_ana_group", 00:04:30.007 "nvmf_subsystem_add_ns", 00:04:30.007 "nvmf_subsystem_listener_set_ana_state", 00:04:30.007 "nvmf_discovery_get_referrals", 00:04:30.007 "nvmf_discovery_remove_referral", 00:04:30.007 "nvmf_discovery_add_referral", 00:04:30.007 "nvmf_subsystem_remove_listener", 00:04:30.007 "nvmf_subsystem_add_listener", 00:04:30.007 "nvmf_delete_subsystem", 00:04:30.007 "nvmf_create_subsystem", 00:04:30.007 "nvmf_get_subsystems", 00:04:30.007 "env_dpdk_get_mem_stats", 00:04:30.007 "nbd_get_disks", 00:04:30.007 "nbd_stop_disk", 00:04:30.007 "nbd_start_disk", 00:04:30.007 "ublk_recover_disk", 00:04:30.007 "ublk_get_disks", 00:04:30.007 "ublk_stop_disk", 00:04:30.007 "ublk_start_disk", 00:04:30.007 "ublk_destroy_target", 00:04:30.007 "ublk_create_target", 00:04:30.007 "virtio_blk_create_transport", 00:04:30.007 "virtio_blk_get_transports", 00:04:30.007 "vhost_controller_set_coalescing", 00:04:30.007 "vhost_get_controllers", 00:04:30.007 "vhost_delete_controller", 00:04:30.007 "vhost_create_blk_controller", 00:04:30.007 "vhost_scsi_controller_remove_target", 00:04:30.007 "vhost_scsi_controller_add_target", 00:04:30.007 "vhost_start_scsi_controller", 00:04:30.007 "vhost_create_scsi_controller", 00:04:30.007 "thread_set_cpumask", 00:04:30.007 "scheduler_set_options", 00:04:30.007 "framework_get_governor", 00:04:30.007 "framework_get_scheduler", 00:04:30.007 "framework_set_scheduler", 00:04:30.007 "framework_get_reactors", 00:04:30.007 "thread_get_io_channels", 00:04:30.007 "thread_get_pollers", 00:04:30.007 "thread_get_stats", 00:04:30.007 "framework_monitor_context_switch", 00:04:30.007 
"spdk_kill_instance", 00:04:30.007 "log_enable_timestamps", 00:04:30.007 "log_get_flags", 00:04:30.007 "log_clear_flag", 00:04:30.007 "log_set_flag", 00:04:30.007 "log_get_level", 00:04:30.007 "log_set_level", 00:04:30.007 "log_get_print_level", 00:04:30.007 "log_set_print_level", 00:04:30.007 "framework_enable_cpumask_locks", 00:04:30.007 "framework_disable_cpumask_locks", 00:04:30.007 "framework_wait_init", 00:04:30.007 "framework_start_init", 00:04:30.007 "scsi_get_devices", 00:04:30.007 "bdev_get_histogram", 00:04:30.007 "bdev_enable_histogram", 00:04:30.007 "bdev_set_qos_limit", 00:04:30.007 "bdev_set_qd_sampling_period", 00:04:30.007 "bdev_get_bdevs", 00:04:30.007 "bdev_reset_iostat", 00:04:30.007 "bdev_get_iostat", 00:04:30.007 "bdev_examine", 00:04:30.007 "bdev_wait_for_examine", 00:04:30.007 "bdev_set_options", 00:04:30.007 "accel_get_stats", 00:04:30.007 "accel_set_options", 00:04:30.007 "accel_set_driver", 00:04:30.007 "accel_crypto_key_destroy", 00:04:30.007 "accel_crypto_keys_get", 00:04:30.007 "accel_crypto_key_create", 00:04:30.007 "accel_assign_opc", 00:04:30.007 "accel_get_module_info", 00:04:30.007 "accel_get_opc_assignments", 00:04:30.007 "vmd_rescan", 00:04:30.007 "vmd_remove_device", 00:04:30.007 "vmd_enable", 00:04:30.007 "sock_get_default_impl", 00:04:30.007 "sock_set_default_impl", 00:04:30.007 "sock_impl_set_options", 00:04:30.007 "sock_impl_get_options", 00:04:30.007 "iobuf_get_stats", 00:04:30.007 "iobuf_set_options", 00:04:30.007 "keyring_get_keys", 00:04:30.007 "framework_get_pci_devices", 00:04:30.007 "framework_get_config", 00:04:30.007 "framework_get_subsystems", 00:04:30.007 "fsdev_set_opts", 00:04:30.007 "fsdev_get_opts", 00:04:30.007 "trace_get_info", 00:04:30.007 "trace_get_tpoint_group_mask", 00:04:30.007 "trace_disable_tpoint_group", 00:04:30.007 "trace_enable_tpoint_group", 00:04:30.007 "trace_clear_tpoint_mask", 00:04:30.007 "trace_set_tpoint_mask", 00:04:30.007 "notify_get_notifications", 00:04:30.007 "notify_get_types", 
00:04:30.008 "spdk_get_version", 00:04:30.008 "rpc_get_methods" 00:04:30.008 ] 00:04:30.008 11:16:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.008 11:16:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.008 11:16:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2252815 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2252815 ']' 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2252815 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252815 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252815' 00:04:30.008 killing process with pid 2252815 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2252815 00:04:30.008 11:16:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2252815 00:04:31.914 00:04:31.914 real 0m3.088s 00:04:31.914 user 0m5.431s 00:04:31.914 sys 0m0.596s 00:04:31.914 11:16:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.914 11:16:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.914 ************************************ 00:04:31.914 END TEST spdkcli_tcp 00:04:31.914 ************************************ 00:04:31.914 11:16:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.914 11:16:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.914 11:16:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.914 11:16:31 -- common/autotest_common.sh@10 -- # set +x 00:04:31.914 ************************************ 00:04:31.914 START TEST dpdk_mem_utility 00:04:31.914 ************************************ 00:04:31.914 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.914 * Looking for test storage... 00:04:31.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:31.914 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.914 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.914 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.914 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.914 
11:16:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.914 11:16:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.915 11:16:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.915 --rc genhtml_branch_coverage=1 00:04:31.915 --rc genhtml_function_coverage=1 00:04:31.915 --rc genhtml_legend=1 00:04:31.915 --rc geninfo_all_blocks=1 00:04:31.915 --rc 
geninfo_unexecuted_blocks=1 00:04:31.915 00:04:31.915 ' 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.915 --rc genhtml_branch_coverage=1 00:04:31.915 --rc genhtml_function_coverage=1 00:04:31.915 --rc genhtml_legend=1 00:04:31.915 --rc geninfo_all_blocks=1 00:04:31.915 --rc geninfo_unexecuted_blocks=1 00:04:31.915 00:04:31.915 ' 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.915 --rc genhtml_branch_coverage=1 00:04:31.915 --rc genhtml_function_coverage=1 00:04:31.915 --rc genhtml_legend=1 00:04:31.915 --rc geninfo_all_blocks=1 00:04:31.915 --rc geninfo_unexecuted_blocks=1 00:04:31.915 00:04:31.915 ' 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.915 --rc genhtml_branch_coverage=1 00:04:31.915 --rc genhtml_function_coverage=1 00:04:31.915 --rc genhtml_legend=1 00:04:31.915 --rc geninfo_all_blocks=1 00:04:31.915 --rc geninfo_unexecuted_blocks=1 00:04:31.915 00:04:31.915 ' 00:04:31.915 11:16:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.915 11:16:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2253491 00:04:31.915 11:16:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2253491 00:04:31.915 11:16:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2253491 ']' 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.915 11:16:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.176 [2024-12-07 11:16:31.327480] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:32.176 [2024-12-07 11:16:31.327592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253491 ] 00:04:32.176 [2024-12-07 11:16:31.455602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.437 [2024-12-07 11:16:31.552082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.009 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.009 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:33.009 11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:33.009 11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:33.009 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.009 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.009 { 00:04:33.009 "filename": "/tmp/spdk_mem_dump.txt" 00:04:33.009 } 00:04:33.009 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.009 
11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.009 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:33.009 1 heaps totaling size 824.000000 MiB 00:04:33.009 size: 824.000000 MiB heap id: 0 00:04:33.009 end heaps---------- 00:04:33.009 9 mempools totaling size 603.782043 MiB 00:04:33.009 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:33.009 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:33.009 size: 100.555481 MiB name: bdev_io_2253491 00:04:33.009 size: 50.003479 MiB name: msgpool_2253491 00:04:33.009 size: 36.509338 MiB name: fsdev_io_2253491 00:04:33.009 size: 21.763794 MiB name: PDU_Pool 00:04:33.009 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:33.009 size: 4.133484 MiB name: evtpool_2253491 00:04:33.009 size: 0.026123 MiB name: Session_Pool 00:04:33.009 end mempools------- 00:04:33.009 6 memzones totaling size 4.142822 MiB 00:04:33.009 size: 1.000366 MiB name: RG_ring_0_2253491 00:04:33.009 size: 1.000366 MiB name: RG_ring_1_2253491 00:04:33.009 size: 1.000366 MiB name: RG_ring_4_2253491 00:04:33.009 size: 1.000366 MiB name: RG_ring_5_2253491 00:04:33.009 size: 0.125366 MiB name: RG_ring_2_2253491 00:04:33.009 size: 0.015991 MiB name: RG_ring_3_2253491 00:04:33.009 end memzones------- 00:04:33.009 11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:33.009 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:04:33.009 list of free elements. 
size: 16.847595 MiB 00:04:33.009 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:33.009 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:33.009 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:33.009 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:33.009 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:33.009 element at address: 0x200019a00000 with size: 0.999329 MiB 00:04:33.009 element at address: 0x200000400000 with size: 0.998108 MiB 00:04:33.009 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:33.009 element at address: 0x200019200000 with size: 0.959900 MiB 00:04:33.009 element at address: 0x200019d00040 with size: 0.937256 MiB 00:04:33.009 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:33.009 element at address: 0x20001b400000 with size: 0.583191 MiB 00:04:33.009 element at address: 0x200000c00000 with size: 0.495300 MiB 00:04:33.009 element at address: 0x200019600000 with size: 0.491150 MiB 00:04:33.009 element at address: 0x200019e00000 with size: 0.485657 MiB 00:04:33.009 element at address: 0x200012c00000 with size: 0.436157 MiB 00:04:33.009 element at address: 0x200028800000 with size: 0.411072 MiB 00:04:33.009 element at address: 0x200000800000 with size: 0.355286 MiB 00:04:33.009 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:04:33.009 list of standard malloc elements. 
size: 199.221497 MiB 00:04:33.009 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:33.009 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:33.009 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:33.009 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:33.009 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:33.009 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:33.009 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:33.009 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:33.009 element at address: 0x200012bff040 with size: 0.000427 MiB 00:04:33.009 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:04:33.009 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:33.009 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:04:33.009 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff200 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff300 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff400 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff500 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff600 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff700 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff800 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bff900 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:33.009 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:33.009 list of memzone associated elements. 
size: 607.930908 MiB 00:04:33.009 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:33.009 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:33.009 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:33.009 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:33.009 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:33.009 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2253491_0 00:04:33.009 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:33.009 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2253491_0 00:04:33.009 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:33.009 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2253491_0 00:04:33.009 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:33.009 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:33.009 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:33.009 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:33.009 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:33.009 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2253491_0 00:04:33.009 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:33.009 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2253491 00:04:33.009 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:33.009 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2253491 00:04:33.009 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:33.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:33.009 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:33.010 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:33.010 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:33.010 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:33.010 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:33.010 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:33.010 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:33.010 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2253491 00:04:33.010 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:33.010 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2253491 00:04:33.010 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:33.010 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2253491 00:04:33.010 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:33.010 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2253491 00:04:33.010 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:33.010 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2253491 00:04:33.010 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:33.010 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2253491 00:04:33.010 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:04:33.010 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:33.010 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:04:33.010 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:33.010 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:04:33.010 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:33.010 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:33.010 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2253491 00:04:33.010 element at address: 0x20000085f180 with size: 0.125549 MiB 00:04:33.010 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2253491 00:04:33.010 element at address: 0x2000192f5bc0 with size: 0.031799 
MiB 00:04:33.010 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:33.010 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:04:33.010 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:33.010 element at address: 0x20000085af40 with size: 0.016174 MiB 00:04:33.010 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2253491 00:04:33.010 element at address: 0x20002886f540 with size: 0.002502 MiB 00:04:33.010 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:33.010 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:04:33.010 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2253491 00:04:33.010 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:33.010 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2253491 00:04:33.010 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:33.010 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2253491 00:04:33.010 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:04:33.010 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:33.010 11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:33.010 11:16:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2253491 00:04:33.010 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2253491 ']' 00:04:33.010 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2253491 00:04:33.010 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:33.010 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.010 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2253491 00:04:33.270 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.270 11:16:32 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.270 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2253491' 00:04:33.270 killing process with pid 2253491 00:04:33.270 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2253491 00:04:33.270 11:16:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2253491 00:04:34.655 00:04:34.655 real 0m2.942s 00:04:34.655 user 0m2.911s 00:04:34.655 sys 0m0.507s 00:04:34.655 11:16:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.655 11:16:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.655 ************************************ 00:04:34.655 END TEST dpdk_mem_utility 00:04:34.655 ************************************ 00:04:34.917 11:16:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.917 11:16:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.917 11:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.917 11:16:34 -- common/autotest_common.sh@10 -- # set +x 00:04:34.917 ************************************ 00:04:34.917 START TEST event 00:04:34.917 ************************************ 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.917 * Looking for test storage... 
00:04:34.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.917 11:16:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.917 11:16:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.917 11:16:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.917 11:16:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.917 11:16:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.917 11:16:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.917 11:16:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.917 11:16:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.917 11:16:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.917 11:16:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.917 11:16:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.917 11:16:34 event -- scripts/common.sh@344 -- # case "$op" in 00:04:34.917 11:16:34 event -- scripts/common.sh@345 -- # : 1 00:04:34.917 11:16:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.917 11:16:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.917 11:16:34 event -- scripts/common.sh@365 -- # decimal 1 00:04:34.917 11:16:34 event -- scripts/common.sh@353 -- # local d=1 00:04:34.917 11:16:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.917 11:16:34 event -- scripts/common.sh@355 -- # echo 1 00:04:34.917 11:16:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.917 11:16:34 event -- scripts/common.sh@366 -- # decimal 2 00:04:34.917 11:16:34 event -- scripts/common.sh@353 -- # local d=2 00:04:34.917 11:16:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.917 11:16:34 event -- scripts/common.sh@355 -- # echo 2 00:04:34.917 11:16:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.917 11:16:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.917 11:16:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.917 11:16:34 event -- scripts/common.sh@368 -- # return 0 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.917 --rc genhtml_branch_coverage=1 00:04:34.917 --rc genhtml_function_coverage=1 00:04:34.917 --rc genhtml_legend=1 00:04:34.917 --rc geninfo_all_blocks=1 00:04:34.917 --rc geninfo_unexecuted_blocks=1 00:04:34.917 00:04:34.917 ' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.917 --rc genhtml_branch_coverage=1 00:04:34.917 --rc genhtml_function_coverage=1 00:04:34.917 --rc genhtml_legend=1 00:04:34.917 --rc geninfo_all_blocks=1 00:04:34.917 --rc geninfo_unexecuted_blocks=1 00:04:34.917 00:04:34.917 ' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.917 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:34.917 --rc genhtml_branch_coverage=1 00:04:34.917 --rc genhtml_function_coverage=1 00:04:34.917 --rc genhtml_legend=1 00:04:34.917 --rc geninfo_all_blocks=1 00:04:34.917 --rc geninfo_unexecuted_blocks=1 00:04:34.917 00:04:34.917 ' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.917 --rc genhtml_branch_coverage=1 00:04:34.917 --rc genhtml_function_coverage=1 00:04:34.917 --rc genhtml_legend=1 00:04:34.917 --rc geninfo_all_blocks=1 00:04:34.917 --rc geninfo_unexecuted_blocks=1 00:04:34.917 00:04:34.917 ' 00:04:34.917 11:16:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:34.917 11:16:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:34.917 11:16:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:34.917 11:16:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.917 11:16:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.178 ************************************ 00:04:35.179 START TEST event_perf 00:04:35.179 ************************************ 00:04:35.179 11:16:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.179 Running I/O for 1 seconds...[2024-12-07 11:16:34.335875] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:35.179 [2024-12-07 11:16:34.335977] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254229 ] 00:04:35.179 [2024-12-07 11:16:34.475857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.439 [2024-12-07 11:16:34.580096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.439 [2024-12-07 11:16:34.580332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.439 [2024-12-07 11:16:34.580444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.439 [2024-12-07 11:16:34.580468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.380 Running I/O for 1 seconds... 00:04:36.380 lcore 0: 200834 00:04:36.380 lcore 1: 200829 00:04:36.380 lcore 2: 200830 00:04:36.380 lcore 3: 200833 00:04:36.640 done. 
00:04:36.640 00:04:36.640 real 0m1.465s 00:04:36.640 user 0m4.314s 00:04:36.640 sys 0m0.146s 00:04:36.640 11:16:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.640 11:16:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.640 ************************************ 00:04:36.640 END TEST event_perf 00:04:36.640 ************************************ 00:04:36.640 11:16:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.640 11:16:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.640 11:16:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.640 11:16:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.640 ************************************ 00:04:36.640 START TEST event_reactor 00:04:36.640 ************************************ 00:04:36.640 11:16:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.640 [2024-12-07 11:16:35.884263] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:36.640 [2024-12-07 11:16:35.884450] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254591 ] 00:04:36.901 [2024-12-07 11:16:36.029356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.901 [2024-12-07 11:16:36.127285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.283 test_start 00:04:38.283 oneshot 00:04:38.283 tick 100 00:04:38.283 tick 100 00:04:38.283 tick 250 00:04:38.283 tick 100 00:04:38.283 tick 100 00:04:38.283 tick 250 00:04:38.283 tick 100 00:04:38.283 tick 500 00:04:38.283 tick 100 00:04:38.283 tick 100 00:04:38.283 tick 250 00:04:38.283 tick 100 00:04:38.283 tick 100 00:04:38.283 test_end 00:04:38.283 00:04:38.283 real 0m1.459s 00:04:38.283 user 0m1.301s 00:04:38.283 sys 0m0.151s 00:04:38.283 11:16:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.283 11:16:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:38.283 ************************************ 00:04:38.283 END TEST event_reactor 00:04:38.283 ************************************ 00:04:38.283 11:16:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.283 11:16:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.283 11:16:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.283 11:16:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.283 ************************************ 00:04:38.283 START TEST event_reactor_perf 00:04:38.284 ************************************ 00:04:38.284 11:16:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:38.284 [2024-12-07 11:16:37.417912] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:38.284 [2024-12-07 11:16:37.418018] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254943 ] 00:04:38.284 [2024-12-07 11:16:37.553586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.544 [2024-12-07 11:16:37.651413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.485 test_start 00:04:39.485 test_end 00:04:39.485 Performance: 297173 events per second 00:04:39.485 00:04:39.485 real 0m1.447s 00:04:39.485 user 0m1.293s 00:04:39.485 sys 0m0.147s 00:04:39.485 11:16:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.485 11:16:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.485 ************************************ 00:04:39.485 END TEST event_reactor_perf 00:04:39.485 ************************************ 00:04:39.746 11:16:38 event -- event/event.sh@49 -- # uname -s 00:04:39.746 11:16:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:39.746 11:16:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:39.746 11:16:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.746 11:16:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.746 11:16:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.746 ************************************ 00:04:39.746 START TEST event_scheduler 00:04:39.746 ************************************ 00:04:39.746 11:16:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:39.746 * Looking for test storage... 00:04:39.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:39.746 11:16:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.746 11:16:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.746 11:16:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:39.746 11:16:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.746 11:16:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:39.746 11:16:39 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.746 11:16:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.746 --rc genhtml_branch_coverage=1 00:04:39.746 --rc genhtml_function_coverage=1 00:04:39.746 --rc genhtml_legend=1 00:04:39.746 --rc geninfo_all_blocks=1 00:04:39.746 --rc geninfo_unexecuted_blocks=1 00:04:39.746 00:04:39.746 ' 00:04:39.746 11:16:39 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.746 --rc genhtml_branch_coverage=1 00:04:39.746 --rc genhtml_function_coverage=1 00:04:39.746 --rc 
genhtml_legend=1 00:04:39.746 --rc geninfo_all_blocks=1 00:04:39.746 --rc geninfo_unexecuted_blocks=1 00:04:39.746 00:04:39.746 ' 00:04:39.746 11:16:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.746 --rc genhtml_branch_coverage=1 00:04:39.746 --rc genhtml_function_coverage=1 00:04:39.746 --rc genhtml_legend=1 00:04:39.746 --rc geninfo_all_blocks=1 00:04:39.746 --rc geninfo_unexecuted_blocks=1 00:04:39.746 00:04:39.746 ' 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:39.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.747 --rc genhtml_branch_coverage=1 00:04:39.747 --rc genhtml_function_coverage=1 00:04:39.747 --rc genhtml_legend=1 00:04:39.747 --rc geninfo_all_blocks=1 00:04:39.747 --rc geninfo_unexecuted_blocks=1 00:04:39.747 00:04:39.747 ' 00:04:39.747 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:39.747 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2255338 00:04:39.747 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.747 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2255338 00:04:39.747 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2255338 ']' 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.747 11:16:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.007 [2024-12-07 11:16:39.179852] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:40.007 [2024-12-07 11:16:39.179986] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255338 ] 00:04:40.007 [2024-12-07 11:16:39.303456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.312 [2024-12-07 11:16:39.385361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.312 [2024-12-07 11:16:39.385501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.312 [2024-12-07 11:16:39.385589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.312 [2024-12-07 11:16:39.385616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:40.953 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.953 [2024-12-07 11:16:39.959659] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:40.953 [2024-12-07 11:16:39.959682] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:40.953 [2024-12-07 11:16:39.959695] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:40.953 [2024-12-07 11:16:39.959701] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:40.953 [2024-12-07 11:16:39.959709] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.953 11:16:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.953 11:16:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:40.953 [2024-12-07 11:16:40.148663] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:40.953 11:16:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.953 11:16:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:40.953 11:16:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:40.954 11:16:40 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 ************************************
00:04:40.954 START TEST scheduler_create_thread
00:04:40.954 ************************************
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 2
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 3
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 4
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 5
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 6
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 7
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.954 8
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:40.954 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:41.216 9
00:04:41.216 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.216 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:41.216 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.216 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:41.476 10
00:04:41.476 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.476 11:16:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:41.476 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.476 11:16:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.861 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.861 11:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:42.861 11:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:42.861 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.861 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:43.800 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.800 11:16:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:43.800 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.800 11:16:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:44.369 11:16:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:44.369 11:16:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:44.369 11:16:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:44.369 11:16:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.369 11:16:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:45.307 11:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:45.307
00:04:45.307 real 0m4.226s
00:04:45.307 user 0m0.022s
00:04:45.307 sys 0m0.009s
00:04:45.307 11:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:45.307 11:16:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:45.308 ************************************
00:04:45.308 END TEST scheduler_create_thread
00:04:45.308 ************************************
00:04:45.308 11:16:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:45.308 11:16:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2255338
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2255338 ']'
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2255338
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255338
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255338'
00:04:45.308 killing process with pid 2255338
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2255338
00:04:45.308 11:16:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2255338
00:04:45.601 [2024-12-07 11:16:44.791670] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:46.172
00:04:46.172 real 0m6.518s
00:04:46.172 user 0m14.436s
00:04:46.172 sys 0m0.514s
00:04:46.172 11:16:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:46.172 11:16:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:46.172 ************************************
00:04:46.172 END TEST event_scheduler
00:04:46.172 ************************************
00:04:46.172 11:16:45 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:46.172 11:16:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:46.172 11:16:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:46.172 11:16:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:46.172 11:16:45 event -- common/autotest_common.sh@10 -- # set +x
00:04:46.172 ************************************
00:04:46.172 START TEST app_repeat
00:04:46.172 ************************************
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2256736
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2256736'
00:04:46.172 Process app_repeat pid: 2256736
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:46.172 spdk_app_start Round 0
00:04:46.172 11:16:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2256736 /var/tmp/spdk-nbd.sock
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2256736 ']'
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:46.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:46.172 11:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:46.431 [2024-12-07 11:16:45.559980] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:04:46.431 [2024-12-07 11:16:45.560090] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256736 ]
00:04:46.431 [2024-12-07 11:16:45.688499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:46.690 [2024-12-07 11:16:45.787100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:46.690 [2024-12-07 11:16:45.787277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.259 11:16:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:47.259 11:16:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:47.259 11:16:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:47.259 Malloc0
00:04:47.259 11:16:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:47.520 Malloc1
00:04:47.520 11:16:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:47.520 11:16:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:47.780 /dev/nbd0
00:04:47.780 11:16:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:47.780 11:16:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:47.780 1+0 records in
00:04:47.780 1+0 records out
00:04:47.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251283 s, 16.3 MB/s
00:04:47.780 11:16:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:47.780 11:16:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:47.780 11:16:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:47.780 11:16:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:47.780 11:16:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:47.780 11:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:47.780 11:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:47.780 11:16:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:48.040 /dev/nbd1
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:48.040 1+0 records in
00:04:48.040 1+0 records out
00:04:48.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293913 s, 13.9 MB/s
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:48.040 11:16:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:48.040 11:16:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:48.301 {
00:04:48.301 "nbd_device": "/dev/nbd0",
00:04:48.301 "bdev_name": "Malloc0"
00:04:48.301 },
00:04:48.301 {
00:04:48.301 "nbd_device": "/dev/nbd1",
00:04:48.301 "bdev_name": "Malloc1"
00:04:48.301 }
00:04:48.301 ]'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:48.301 {
00:04:48.301 "nbd_device": "/dev/nbd0",
00:04:48.301 "bdev_name": "Malloc0"
00:04:48.301 },
00:04:48.301 {
00:04:48.301 "nbd_device": "/dev/nbd1",
00:04:48.301 "bdev_name": "Malloc1"
00:04:48.301 }
00:04:48.301 ]'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:48.301 /dev/nbd1'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:48.301 /dev/nbd1'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:48.301 256+0 records in
00:04:48.301 256+0 records out
00:04:48.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118189 s, 88.7 MB/s
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:48.301 256+0 records in
00:04:48.301 256+0 records out
00:04:48.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155919 s, 67.3 MB/s
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:48.301 11:16:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:48.301 256+0 records in
00:04:48.302 256+0 records out
00:04:48.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021964 s, 47.7 MB/s
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:48.302 11:16:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:48.563 11:16:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:48.824 11:16:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:48.824 11:16:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:48.824 11:16:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:49.395 11:16:48 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:49.965 [2024-12-07 11:16:49.277744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:50.225 [2024-12-07 11:16:49.368377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.225 [2024-12-07 11:16:49.368380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:50.225 [2024-12-07 11:16:49.506715] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:50.225 [2024-12-07 11:16:49.506765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:52.136 11:16:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:52.136 11:16:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:52.136 spdk_app_start Round 1
00:04:52.136 11:16:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2256736 /var/tmp/spdk-nbd.sock
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2256736 ']'
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:52.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.136 11:16:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:52.397 11:16:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.397 11:16:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:52.397 11:16:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:52.657 Malloc0
00:04:52.657 11:16:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:52.918 Malloc1
00:04:52.918 11:16:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:52.918 /dev/nbd0
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:52.918 11:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:52.918 1+0 records in
00:04:52.918 1+0 records out
00:04:52.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221261 s, 18.5 MB/s
00:04:52.918 11:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:53.179 /dev/nbd1
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:53.179 1+0 records in
00:04:53.179 1+0 records out
00:04:53.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291206 s, 14.1 MB/s
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:53.179 11:16:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:53.179 11:16:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:53.440 {
00:04:53.440 "nbd_device": "/dev/nbd0",
00:04:53.440 "bdev_name": "Malloc0"
00:04:53.440 },
00:04:53.440 {
00:04:53.440 "nbd_device": "/dev/nbd1",
00:04:53.440 "bdev_name": "Malloc1"
00:04:53.440 }
00:04:53.440 ]'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:53.440 {
00:04:53.440 "nbd_device": "/dev/nbd0",
00:04:53.440 "bdev_name": "Malloc0"
00:04:53.440 },
00:04:53.440 {
00:04:53.440 "nbd_device": "/dev/nbd1",
00:04:53.440 "bdev_name": "Malloc1"
00:04:53.440 }
00:04:53.440 ]'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:53.440 /dev/nbd1'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:53.440 /dev/nbd1'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:53.440 256+0 records in
00:04:53.440 256+0 records out
00:04:53.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125494 s, 83.6 MB/s
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:53.440 256+0 records in
00:04:53.440 256+0 records out
00:04:53.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162881 s, 64.4 MB/s
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.440 256+0 records in 00:04:53.440 256+0 records out 00:04:53.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221694 s, 47.3 MB/s 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.440 11:16:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.701 11:16:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.961 11:16:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.961 11:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.222 11:16:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.222 11:16:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.482 11:16:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.424 [2024-12-07 11:16:54.508741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.424 [2024-12-07 11:16:54.599966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.424 [2024-12-07 11:16:54.599983] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.424 [2024-12-07 11:16:54.738334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.424 [2024-12-07 11:16:54.738382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.347 11:16:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.347 11:16:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.347 spdk_app_start Round 2 00:04:57.347 11:16:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2256736 /var/tmp/spdk-nbd.sock 00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2256736 ']' 00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.347 11:16:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.608 11:16:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.608 11:16:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.608 11:16:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.868 Malloc0 00:04:57.868 11:16:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.128 Malloc1 00:04:58.128 11:16:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.128 11:16:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.128 11:16:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.129 11:16:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.129 /dev/nbd0 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.390 1+0 records in 00:04:58.390 1+0 records out 00:04:58.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240536 s, 17.0 MB/s 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.390 11:16:57 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.390 /dev/nbd1 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.390 1+0 records in 00:04:58.390 1+0 records out 00:04:58.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249504 s, 16.4 MB/s 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.390 11:16:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.390 11:16:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.651 { 00:04:58.651 "nbd_device": "/dev/nbd0", 00:04:58.651 "bdev_name": "Malloc0" 00:04:58.651 }, 00:04:58.651 { 00:04:58.651 "nbd_device": "/dev/nbd1", 00:04:58.651 "bdev_name": "Malloc1" 00:04:58.651 } 00:04:58.651 ]' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.651 { 00:04:58.651 "nbd_device": "/dev/nbd0", 00:04:58.651 "bdev_name": "Malloc0" 00:04:58.651 }, 00:04:58.651 { 00:04:58.651 "nbd_device": "/dev/nbd1", 00:04:58.651 "bdev_name": "Malloc1" 00:04:58.651 } 00:04:58.651 ]' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.651 /dev/nbd1' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.651 /dev/nbd1' 00:04:58.651 
11:16:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.651 256+0 records in 00:04:58.651 256+0 records out 00:04:58.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126738 s, 82.7 MB/s 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.651 256+0 records in 00:04:58.651 256+0 records out 00:04:58.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150551 s, 69.6 MB/s 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.651 11:16:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.911 256+0 records in 00:04:58.911 256+0 records out 00:04:58.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176038 s, 59.6 MB/s 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.911 11:16:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.171 11:16:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.171 11:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.430 11:16:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.430 11:16:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.690 11:16:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.629 [2024-12-07 11:16:59.754513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.629 [2024-12-07 11:16:59.845145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.629 [2024-12-07 11:16:59.845147] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.890 [2024-12-07 11:16:59.983444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.890 [2024-12-07 11:16:59.983493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.802 11:17:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2256736 /var/tmp/spdk-nbd.sock 00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2256736 ']' 00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.803 11:17:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.803 11:17:02 event.app_repeat -- event/event.sh@39 -- # killprocess 2256736 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2256736 ']' 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2256736 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.803 11:17:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256736 00:05:03.063 11:17:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.063 11:17:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.063 11:17:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256736' 00:05:03.063 killing process with pid 2256736 00:05:03.064 11:17:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2256736 00:05:03.064 11:17:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2256736 00:05:03.634 spdk_app_start is called in Round 0. 00:05:03.634 Shutdown signal received, stop current app iteration 00:05:03.634 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.634 spdk_app_start is called in Round 1. 00:05:03.634 Shutdown signal received, stop current app iteration 00:05:03.634 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.634 spdk_app_start is called in Round 2. 
00:05:03.634 Shutdown signal received, stop current app iteration 00:05:03.634 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:03.634 spdk_app_start is called in Round 3. 00:05:03.634 Shutdown signal received, stop current app iteration 00:05:03.634 11:17:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:03.634 11:17:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:03.634 00:05:03.634 real 0m17.377s 00:05:03.634 user 0m36.668s 00:05:03.634 sys 0m2.386s 00:05:03.634 11:17:02 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.634 11:17:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 END TEST app_repeat 00:05:03.634 ************************************ 00:05:03.634 11:17:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:03.634 11:17:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.634 11:17:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.634 11:17:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.634 11:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 START TEST cpu_locks 00:05:03.634 ************************************ 00:05:03.634 11:17:02 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.896 * Looking for test storage... 
00:05:03.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.896 11:17:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.896 11:17:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.896 --rc genhtml_branch_coverage=1 00:05:03.897 --rc genhtml_function_coverage=1 00:05:03.897 --rc genhtml_legend=1 00:05:03.897 --rc geninfo_all_blocks=1 00:05:03.897 --rc geninfo_unexecuted_blocks=1 00:05:03.897 00:05:03.897 ' 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.897 --rc genhtml_branch_coverage=1 00:05:03.897 --rc genhtml_function_coverage=1 00:05:03.897 --rc genhtml_legend=1 00:05:03.897 --rc geninfo_all_blocks=1 00:05:03.897 --rc geninfo_unexecuted_blocks=1 
00:05:03.897 00:05:03.897 ' 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.897 --rc genhtml_branch_coverage=1 00:05:03.897 --rc genhtml_function_coverage=1 00:05:03.897 --rc genhtml_legend=1 00:05:03.897 --rc geninfo_all_blocks=1 00:05:03.897 --rc geninfo_unexecuted_blocks=1 00:05:03.897 00:05:03.897 ' 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.897 --rc genhtml_branch_coverage=1 00:05:03.897 --rc genhtml_function_coverage=1 00:05:03.897 --rc genhtml_legend=1 00:05:03.897 --rc geninfo_all_blocks=1 00:05:03.897 --rc geninfo_unexecuted_blocks=1 00:05:03.897 00:05:03.897 ' 00:05:03.897 11:17:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:03.897 11:17:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:03.897 11:17:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:03.897 11:17:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.897 11:17:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.897 ************************************ 00:05:03.897 START TEST default_locks 00:05:03.897 ************************************ 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2260342 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2260342 00:05:03.897 11:17:03 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2260342 ']' 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.897 11:17:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 [2024-12-07 11:17:03.292861] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:04.158 [2024-12-07 11:17:03.292970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260342 ] 00:05:04.158 [2024-12-07 11:17:03.424082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.419 [2024-12-07 11:17:03.520797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.988 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.988 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:04.988 11:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2260342 00:05:04.989 11:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2260342 00:05:04.989 11:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.559 lslocks: write error 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2260342 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2260342 ']' 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2260342 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2260342 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2260342' 00:05:05.559 killing process with pid 2260342 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2260342 00:05:05.559 11:17:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2260342 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2260342 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2260342 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.472 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2260342 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2260342 ']' 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2260342) - No such process 00:05:07.473 ERROR: process (pid: 2260342) is no longer running 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.473 00:05:07.473 real 0m3.201s 00:05:07.473 user 0m3.179s 00:05:07.473 sys 0m0.704s 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.473 11:17:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.473 ************************************ 00:05:07.473 END TEST default_locks 00:05:07.473 ************************************ 00:05:07.473 11:17:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:07.473 11:17:06 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.473 11:17:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.473 11:17:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.473 ************************************ 00:05:07.473 START TEST default_locks_via_rpc 00:05:07.473 ************************************ 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2261052 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2261052 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2261052 ']' 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.473 11:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.473 [2024-12-07 11:17:06.569090] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:07.473 [2024-12-07 11:17:06.569209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261052 ] 00:05:07.473 [2024-12-07 11:17:06.707224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.473 [2024-12-07 11:17:06.805553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.415 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.416 11:17:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2261052 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2261052 00:05:08.416 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2261052 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2261052 ']' 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2261052 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261052 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261052' 00:05:08.678 killing process with pid 2261052 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2261052 00:05:08.678 11:17:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2261052 00:05:10.592 00:05:10.592 real 0m3.098s 00:05:10.592 user 0m3.065s 00:05:10.592 sys 0m0.690s 00:05:10.592 11:17:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.592 11:17:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.592 ************************************ 00:05:10.592 END TEST default_locks_via_rpc 00:05:10.592 ************************************ 00:05:10.592 11:17:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:10.593 11:17:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.593 11:17:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.593 11:17:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.593 ************************************ 00:05:10.593 START TEST non_locking_app_on_locked_coremask 00:05:10.593 ************************************ 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2261749 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2261749 /var/tmp/spdk.sock 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2261749 ']' 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:10.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.593 11:17:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.593 [2024-12-07 11:17:09.743194] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:10.593 [2024-12-07 11:17:09.743322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261749 ] 00:05:10.593 [2024-12-07 11:17:09.882718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.853 [2024-12-07 11:17:09.981379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2261877 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2261877 /var/tmp/spdk2.sock 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2261877 ']' 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.424 11:17:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.424 [2024-12-07 11:17:10.716693] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:11.424 [2024-12-07 11:17:10.716809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261877 ] 00:05:11.685 [2024-12-07 11:17:10.906123] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.685 [2024-12-07 11:17:10.906174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.944 [2024-12-07 11:17:11.100662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.329 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.329 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.329 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2261749 00:05:13.329 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2261749 00:05:13.329 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.590 lslocks: write error 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2261749 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2261749 ']' 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2261749 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.590 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261749 00:05:13.591 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.591 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.591 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2261749' 00:05:13.591 killing process with pid 2261749 00:05:13.591 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2261749 00:05:13.591 11:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2261749 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2261877 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2261877 ']' 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2261877 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261877 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261877' 00:05:16.907 killing process with pid 2261877 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2261877 00:05:16.907 11:17:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2261877 00:05:18.818 00:05:18.818 real 0m8.176s 00:05:18.818 user 0m8.316s 00:05:18.818 sys 0m1.171s 00:05:18.818 11:17:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.818 11:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.818 ************************************ 00:05:18.818 END TEST non_locking_app_on_locked_coremask 00:05:18.818 ************************************ 00:05:18.818 11:17:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.818 11:17:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.818 11:17:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.818 11:17:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.818 ************************************ 00:05:18.818 START TEST locking_app_on_unlocked_coremask 00:05:18.818 ************************************ 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2263468 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2263468 /var/tmp/spdk.sock 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2263468 ']' 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.818 11:17:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.818 11:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.818 [2024-12-07 11:17:17.993165] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:18.818 [2024-12-07 11:17:17.993278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263468 ] 00:05:18.818 [2024-12-07 11:17:18.137218] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.818 [2024-12-07 11:17:18.137271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.077 [2024-12-07 11:17:18.236301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2263485 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2263485 /var/tmp/spdk2.sock 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2263485 ']' 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.647 11:17:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.647 [2024-12-07 11:17:18.968823] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:19.647 [2024-12-07 11:17:18.968928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263485 ] 00:05:19.908 [2024-12-07 11:17:19.156899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.168 [2024-12-07 11:17:19.350857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.549 11:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.549 11:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.549 11:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2263485 00:05:21.549 11:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2263485 00:05:21.549 11:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.809 lslocks: write error 00:05:21.809 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2263468 00:05:21.809 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2263468 ']' 00:05:21.809 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2263468 00:05:21.809 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263468 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263468' 00:05:22.068 killing process with pid 2263468 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2263468 00:05:22.068 11:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2263468 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2263485 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2263485 ']' 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2263485 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263485 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263485' 00:05:25.368 killing process with pid 2263485 00:05:25.368 11:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2263485 00:05:25.368 11:17:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2263485 00:05:26.826 00:05:26.826 real 0m8.251s 00:05:26.826 user 0m8.359s 00:05:26.826 sys 0m1.207s 00:05:26.826 11:17:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.826 11:17:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.826 ************************************ 00:05:26.826 END TEST locking_app_on_unlocked_coremask 00:05:26.826 ************************************ 00:05:27.135 11:17:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.135 11:17:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.135 11:17:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.135 11:17:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.135 ************************************ 00:05:27.135 START TEST locking_app_on_locked_coremask 00:05:27.135 ************************************ 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2265118 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2265118 /var/tmp/spdk.sock 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2265118 ']' 00:05:27.135 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:27.136 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.136 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.136 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.136 11:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.136 [2024-12-07 11:17:26.329839] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:27.136 [2024-12-07 11:17:26.329960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265118 ] 00:05:27.136 [2024-12-07 11:17:26.465293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.395 [2024-12-07 11:17:26.561514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2265210 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2265210 /var/tmp/spdk2.sock 
00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2265210 /var/tmp/spdk2.sock 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2265210 /var/tmp/spdk2.sock 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2265210 ']' 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.964 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.964 [2024-12-07 11:17:27.314365] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:27.964 [2024-12-07 11:17:27.314475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265210 ] 00:05:28.224 [2024-12-07 11:17:27.500925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2265118 has claimed it. 00:05:28.224 [2024-12-07 11:17:27.500985] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2265210) - No such process 00:05:28.796 ERROR: process (pid: 2265210) is no longer running 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2265118 00:05:28.796 11:17:27 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2265118 00:05:28.796 11:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.057 lslocks: write error 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2265118 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2265118 ']' 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2265118 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.057 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265118 00:05:29.317 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.317 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.317 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265118' 00:05:29.317 killing process with pid 2265118 00:05:29.317 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2265118 00:05:29.317 11:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2265118 00:05:30.699 00:05:30.699 real 0m3.822s 00:05:30.699 user 0m3.950s 00:05:30.699 sys 0m0.888s 00:05:30.699 11:17:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.699 11:17:30 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.699 ************************************ 00:05:30.699 END TEST locking_app_on_locked_coremask 00:05:30.699 ************************************ 00:05:30.959 11:17:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.959 11:17:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.959 11:17:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.959 11:17:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.959 ************************************ 00:05:30.959 START TEST locking_overlapped_coremask 00:05:30.959 ************************************ 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2265906 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2265906 /var/tmp/spdk.sock 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2265906 ']' 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.959 11:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.959 [2024-12-07 11:17:30.212736] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:30.959 [2024-12-07 11:17:30.212855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265906 ] 00:05:31.220 [2024-12-07 11:17:30.351162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.220 [2024-12-07 11:17:30.453281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.220 [2024-12-07 11:17:30.453436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.220 [2024-12-07 11:17:30.453436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2266064 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2266064 /var/tmp/spdk2.sock 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2266064 /var/tmp/spdk2.sock 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2266064 /var/tmp/spdk2.sock 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2266064 ']' 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.791 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.051 [2024-12-07 11:17:31.195483] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:32.051 [2024-12-07 11:17:31.195581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266064 ] 00:05:32.051 [2024-12-07 11:17:31.346903] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2265906 has claimed it. 00:05:32.051 [2024-12-07 11:17:31.346951] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2266064) - No such process 00:05:32.626 ERROR: process (pid: 2266064) is no longer running 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2265906 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2265906 ']' 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2265906 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265906 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265906' 00:05:32.626 killing process with pid 2265906 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2265906 00:05:32.626 11:17:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2265906 00:05:34.541 00:05:34.541 real 0m3.340s 00:05:34.541 user 0m9.024s 00:05:34.541 sys 0m0.607s 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 
************************************ 00:05:34.541 END TEST locking_overlapped_coremask 00:05:34.541 ************************************ 00:05:34.541 11:17:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:34.541 11:17:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.541 11:17:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.541 11:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 ************************************ 00:05:34.541 START TEST locking_overlapped_coremask_via_rpc 00:05:34.541 ************************************ 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2266618 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2266618 /var/tmp/spdk.sock 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2266618 ']' 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.541 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.541 [2024-12-07 11:17:33.629937] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:34.541 [2024-12-07 11:17:33.630075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266618 ] 00:05:34.541 [2024-12-07 11:17:33.771048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.541 [2024-12-07 11:17:33.771101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.541 [2024-12-07 11:17:33.872964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.541 [2024-12-07 11:17:33.873053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.541 [2024-12-07 11:17:33.873072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2266784 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2266784 /var/tmp/spdk2.sock 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2266784 ']' 00:05:35.480 11:17:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.480 11:17:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.480 [2024-12-07 11:17:34.612475] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:35.481 [2024-12-07 11:17:34.612585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266784 ] 00:05:35.481 [2024-12-07 11:17:34.764779] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.481 [2024-12-07 11:17:34.764820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.739 [2024-12-07 11:17:34.922917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.739 [2024-12-07 11:17:34.923018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.739 [2024-12-07 11:17:34.923060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.679 11:17:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.679 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.679 [2024-12-07 11:17:35.879128] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2266618 has claimed it. 00:05:36.679 request: 00:05:36.679 { 00:05:36.679 "method": "framework_enable_cpumask_locks", 00:05:36.679 "req_id": 1 00:05:36.679 } 00:05:36.679 Got JSON-RPC error response 00:05:36.680 response: 00:05:36.680 { 00:05:36.680 "code": -32603, 00:05:36.680 "message": "Failed to claim CPU core: 2" 00:05:36.680 } 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2266618 /var/tmp/spdk.sock 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2266618 ']' 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.680 11:17:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2266784 /var/tmp/spdk2.sock 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2266784 ']' 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.953 00:05:36.953 real 0m2.727s 00:05:36.953 user 0m0.885s 00:05:36.953 sys 0m0.156s 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.953 11:17:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.953 ************************************ 00:05:36.953 END TEST locking_overlapped_coremask_via_rpc 00:05:36.953 ************************************ 00:05:36.953 11:17:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.953 11:17:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2266618 ]] 00:05:36.953 11:17:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2266618 00:05:36.953 11:17:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2266618 ']' 00:05:36.953 11:17:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2266618 00:05:36.953 11:17:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.953 11:17:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266618 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266618' 00:05:37.213 killing process with pid 2266618 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2266618 00:05:37.213 11:17:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2266618 00:05:39.119 11:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2266784 ]] 00:05:39.119 11:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2266784 00:05:39.119 11:17:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2266784 ']' 00:05:39.119 11:17:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2266784 00:05:39.119 11:17:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.119 11:17:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.119 11:17:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266784 00:05:39.119 11:17:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.119 11:17:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.119 11:17:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2266784' 00:05:39.119 killing process with pid 2266784 00:05:39.119 11:17:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2266784 00:05:39.119 11:17:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2266784 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2266618 ]] 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2266618 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2266618 ']' 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2266618 00:05:40.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2266618) - No such process 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2266618 is not found' 00:05:40.058 Process with pid 2266618 is not found 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2266784 ]] 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2266784 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2266784 ']' 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2266784 00:05:40.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2266784) - No such process 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2266784 is not found' 00:05:40.058 Process with pid 2266784 is not found 00:05:40.058 11:17:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.058 00:05:40.058 real 0m36.269s 00:05:40.058 user 0m58.590s 00:05:40.058 sys 0m6.585s 00:05:40.058 11:17:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.058 
11:17:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.058 ************************************ 00:05:40.058 END TEST cpu_locks 00:05:40.058 ************************************ 00:05:40.058 00:05:40.058 real 1m5.213s 00:05:40.058 user 1m56.893s 00:05:40.058 sys 0m10.353s 00:05:40.058 11:17:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.058 11:17:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.058 ************************************ 00:05:40.058 END TEST event 00:05:40.058 ************************************ 00:05:40.058 11:17:39 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.058 11:17:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.058 11:17:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.058 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.058 ************************************ 00:05:40.058 START TEST thread 00:05:40.058 ************************************ 00:05:40.058 11:17:39 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.318 * Looking for test storage... 
00:05:40.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.318 11:17:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.318 11:17:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.318 11:17:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.318 11:17:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.318 11:17:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.318 11:17:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.318 11:17:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.318 11:17:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.318 11:17:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.318 11:17:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.318 11:17:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.318 11:17:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:40.318 11:17:39 thread -- scripts/common.sh@345 -- # : 1 00:05:40.318 11:17:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.318 11:17:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.318 11:17:39 thread -- scripts/common.sh@365 -- # decimal 1 00:05:40.318 11:17:39 thread -- scripts/common.sh@353 -- # local d=1 00:05:40.318 11:17:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.318 11:17:39 thread -- scripts/common.sh@355 -- # echo 1 00:05:40.318 11:17:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.318 11:17:39 thread -- scripts/common.sh@366 -- # decimal 2 00:05:40.318 11:17:39 thread -- scripts/common.sh@353 -- # local d=2 00:05:40.318 11:17:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.318 11:17:39 thread -- scripts/common.sh@355 -- # echo 2 00:05:40.318 11:17:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.318 11:17:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.318 11:17:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.318 11:17:39 thread -- scripts/common.sh@368 -- # return 0 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.318 --rc genhtml_branch_coverage=1 00:05:40.318 --rc genhtml_function_coverage=1 00:05:40.318 --rc genhtml_legend=1 00:05:40.318 --rc geninfo_all_blocks=1 00:05:40.318 --rc geninfo_unexecuted_blocks=1 00:05:40.318 00:05:40.318 ' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.318 --rc genhtml_branch_coverage=1 00:05:40.318 --rc genhtml_function_coverage=1 00:05:40.318 --rc genhtml_legend=1 00:05:40.318 --rc geninfo_all_blocks=1 00:05:40.318 --rc geninfo_unexecuted_blocks=1 00:05:40.318 00:05:40.318 ' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.318 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.318 --rc genhtml_branch_coverage=1 00:05:40.318 --rc genhtml_function_coverage=1 00:05:40.318 --rc genhtml_legend=1 00:05:40.318 --rc geninfo_all_blocks=1 00:05:40.318 --rc geninfo_unexecuted_blocks=1 00:05:40.318 00:05:40.318 ' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.318 --rc genhtml_branch_coverage=1 00:05:40.318 --rc genhtml_function_coverage=1 00:05:40.318 --rc genhtml_legend=1 00:05:40.318 --rc geninfo_all_blocks=1 00:05:40.318 --rc geninfo_unexecuted_blocks=1 00:05:40.318 00:05:40.318 ' 00:05:40.318 11:17:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.318 11:17:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.318 ************************************ 00:05:40.318 START TEST thread_poller_perf 00:05:40.318 ************************************ 00:05:40.318 11:17:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.318 [2024-12-07 11:17:39.633728] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:40.318 [2024-12-07 11:17:39.633837] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267863 ] 00:05:40.578 [2024-12-07 11:17:39.773928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.578 [2024-12-07 11:17:39.876153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.578 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:41.961 [2024-12-07T10:17:41.315Z] ====================================== 00:05:41.961 [2024-12-07T10:17:41.315Z] busy:2412673588 (cyc) 00:05:41.961 [2024-12-07T10:17:41.315Z] total_run_count: 284000 00:05:41.961 [2024-12-07T10:17:41.315Z] tsc_hz: 2400000000 (cyc) 00:05:41.961 [2024-12-07T10:17:41.315Z] ====================================== 00:05:41.961 [2024-12-07T10:17:41.315Z] poller_cost: 8495 (cyc), 3539 (nsec) 00:05:41.961 00:05:41.961 real 0m1.464s 00:05:41.961 user 0m1.318s 00:05:41.961 sys 0m0.139s 00:05:41.961 11:17:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.961 11:17:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 ************************************ 00:05:41.961 END TEST thread_poller_perf 00:05:41.961 ************************************ 00:05:41.961 11:17:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.961 11:17:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.961 11:17:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.961 11:17:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 ************************************ 00:05:41.961 START TEST thread_poller_perf 00:05:41.961 
************************************ 00:05:41.961 11:17:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.961 [2024-12-07 11:17:41.174903] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:41.961 [2024-12-07 11:17:41.175009] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268128 ] 00:05:41.961 [2024-12-07 11:17:41.311866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.222 [2024-12-07 11:17:41.414422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.222 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.604 [2024-12-07T10:17:42.958Z] ====================================== 00:05:43.604 [2024-12-07T10:17:42.958Z] busy:2402982418 (cyc) 00:05:43.604 [2024-12-07T10:17:42.958Z] total_run_count: 3376000 00:05:43.604 [2024-12-07T10:17:42.958Z] tsc_hz: 2400000000 (cyc) 00:05:43.604 [2024-12-07T10:17:42.958Z] ====================================== 00:05:43.604 [2024-12-07T10:17:42.958Z] poller_cost: 711 (cyc), 296 (nsec) 00:05:43.604 00:05:43.604 real 0m1.451s 00:05:43.604 user 0m1.313s 00:05:43.604 sys 0m0.133s 00:05:43.604 11:17:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.604 11:17:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.604 ************************************ 00:05:43.604 END TEST thread_poller_perf 00:05:43.604 ************************************ 00:05:43.604 11:17:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:43.604 00:05:43.604 real 0m3.276s 00:05:43.604 user 0m2.811s 00:05:43.604 sys 0m0.475s 00:05:43.604 11:17:42 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.604 11:17:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.604 ************************************ 00:05:43.604 END TEST thread 00:05:43.604 ************************************ 00:05:43.604 11:17:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:43.605 11:17:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:43.605 11:17:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.605 11:17:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.605 11:17:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 ************************************ 00:05:43.605 START TEST app_cmdline 00:05:43.605 ************************************ 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:43.605 * Looking for test storage... 00:05:43.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.605 11:17:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 
00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 11:17:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:43.605 11:17:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2268552 00:05:43.605 11:17:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2268552 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2268552 ']' 00:05:43.605 11:17:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.605 11:17:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.865 [2024-12-07 11:17:43.006769] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:43.865 [2024-12-07 11:17:43.006909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268552 ] 00:05:43.865 [2024-12-07 11:17:43.148216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.126 [2024-12-07 11:17:43.247999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.695 11:17:43 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.695 11:17:43 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:44.695 11:17:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:44.695 { 00:05:44.695 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:05:44.695 "fields": { 00:05:44.695 "major": 25, 00:05:44.695 "minor": 1, 00:05:44.695 "patch": 0, 00:05:44.695 "suffix": "-pre", 00:05:44.695 "commit": "a2f5e1c2d" 00:05:44.695 } 00:05:44.695 } 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.956 request: 00:05:44.956 { 00:05:44.956 "method": "env_dpdk_get_mem_stats", 00:05:44.956 "req_id": 1 00:05:44.956 } 00:05:44.956 Got JSON-RPC error response 00:05:44.956 response: 00:05:44.956 { 00:05:44.956 "code": -32601, 00:05:44.956 "message": "Method not found" 00:05:44.956 } 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.956 11:17:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2268552 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2268552 ']' 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2268552 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.956 11:17:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268552 00:05:45.217 11:17:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.217 11:17:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.217 11:17:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268552' 00:05:45.217 killing process with pid 2268552 00:05:45.217 
11:17:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 2268552 00:05:45.217 11:17:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 2268552 00:05:47.129 00:05:47.129 real 0m3.259s 00:05:47.129 user 0m3.459s 00:05:47.129 sys 0m0.598s 00:05:47.129 11:17:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.129 11:17:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 ************************************ 00:05:47.129 END TEST app_cmdline 00:05:47.129 ************************************ 00:05:47.129 11:17:45 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.129 11:17:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.129 11:17:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.129 11:17:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 ************************************ 00:05:47.129 START TEST version 00:05:47.129 ************************************ 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:47.129 * Looking for test storage... 
00:05:47.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.129 11:17:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.129 11:17:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.129 11:17:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.129 11:17:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.129 11:17:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.129 11:17:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.129 11:17:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.129 11:17:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.129 11:17:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.129 11:17:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.129 11:17:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.129 11:17:46 version -- scripts/common.sh@344 -- # case "$op" in 00:05:47.129 11:17:46 version -- scripts/common.sh@345 -- # : 1 00:05:47.129 11:17:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.129 11:17:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.129 11:17:46 version -- scripts/common.sh@365 -- # decimal 1 00:05:47.129 11:17:46 version -- scripts/common.sh@353 -- # local d=1 00:05:47.129 11:17:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.129 11:17:46 version -- scripts/common.sh@355 -- # echo 1 00:05:47.129 11:17:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.129 11:17:46 version -- scripts/common.sh@366 -- # decimal 2 00:05:47.129 11:17:46 version -- scripts/common.sh@353 -- # local d=2 00:05:47.129 11:17:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.129 11:17:46 version -- scripts/common.sh@355 -- # echo 2 00:05:47.129 11:17:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.129 11:17:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.129 11:17:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.129 11:17:46 version -- scripts/common.sh@368 -- # return 0 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.129 --rc genhtml_branch_coverage=1 00:05:47.129 --rc genhtml_function_coverage=1 00:05:47.129 --rc genhtml_legend=1 00:05:47.129 --rc geninfo_all_blocks=1 00:05:47.129 --rc geninfo_unexecuted_blocks=1 00:05:47.129 00:05:47.129 ' 00:05:47.129 11:17:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.129 --rc genhtml_branch_coverage=1 00:05:47.129 --rc genhtml_function_coverage=1 00:05:47.129 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 00:05:47.130 ' 00:05:47.130 11:17:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.130 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 00:05:47.130 ' 00:05:47.130 11:17:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 00:05:47.130 ' 00:05:47.130 11:17:46 version -- app/version.sh@17 -- # get_header_version major 00:05:47.130 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.130 11:17:46 version -- app/version.sh@17 -- # major=25 00:05:47.130 11:17:46 version -- app/version.sh@18 -- # get_header_version minor 00:05:47.130 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.130 11:17:46 version -- app/version.sh@18 -- # minor=1 00:05:47.130 11:17:46 version -- app/version.sh@19 -- # get_header_version patch 00:05:47.130 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.130 
11:17:46 version -- app/version.sh@19 -- # patch=0 00:05:47.130 11:17:46 version -- app/version.sh@20 -- # get_header_version suffix 00:05:47.130 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:47.130 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.130 11:17:46 version -- app/version.sh@20 -- # suffix=-pre 00:05:47.130 11:17:46 version -- app/version.sh@22 -- # version=25.1 00:05:47.130 11:17:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:47.130 11:17:46 version -- app/version.sh@28 -- # version=25.1rc0 00:05:47.130 11:17:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:47.130 11:17:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:47.130 11:17:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:47.130 11:17:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:47.130 00:05:47.130 real 0m0.258s 00:05:47.130 user 0m0.150s 00:05:47.130 sys 0m0.152s 00:05:47.130 11:17:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.130 11:17:46 version -- common/autotest_common.sh@10 -- # set +x 00:05:47.130 ************************************ 00:05:47.130 END TEST version 00:05:47.130 ************************************ 00:05:47.130 11:17:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:47.130 11:17:46 -- spdk/autotest.sh@194 -- # uname -s 00:05:47.130 11:17:46 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:47.130 11:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:47.130 11:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:47.130 11:17:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:47.130 11:17:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.130 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.130 11:17:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:47.130 11:17:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:47.130 11:17:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:47.130 11:17:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.130 11:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.130 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.130 ************************************ 00:05:47.130 START TEST nvmf_tcp 00:05:47.130 ************************************ 00:05:47.130 11:17:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:47.391 * Looking for test storage... 
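The `TEST version` section above traces `get_header_version`, which greps one `SPDK_VERSION_*` define out of `include/spdk/version.h`, takes the second tab-separated field with `cut -f2`, and strips quotes with `tr -d '"'`. A minimal sketch of that pattern against a stand-in header (the `printf`-generated file and the final `echo` are illustrative; the real script reads the checked-out `version.h` and further maps the `-pre` suffix before comparing against the Python package version):

```shell
#!/usr/bin/env bash
# Sketch of the get_header_version helper traced in the log above.
# Note: the real version.h separates the macro name and value with a tab,
# which is why plain `cut -f2` (default delimiter: tab) works.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$2" \
        | cut -f2 | tr -d '"'
}

# Stand-in header file (printf reuses the format string for each pair,
# emitting one tab-separated define per line).
header=$(mktemp)
printf '#define SPDK_VERSION_%s\t%s\n' \
    MAJOR 25 MINOR 1 PATCH 0 SUFFIX '"-pre"' > "$header"

major=$(get_header_version MAJOR  "$header")
minor=$(get_header_version MINOR  "$header")
patch=$(get_header_version PATCH  "$header")
suffix=$(get_header_version SUFFIX "$header")

version="${major}.${minor}"
if (( patch != 0 )); then
    version+=".${patch}"    # patch component is only appended when nonzero
fi
echo "$version $suffix"     # 25.1 -pre
rm -f "$header"
```

This matches the values the log derives (`major=25`, `minor=1`, `patch=0`, `suffix=-pre`, `version=25.1`) before the suffix handling produces the `25.1rc0` string compared against `spdk.__version__`.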
00:05:47.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.391 11:17:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 00:05:47.391 ' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 00:05:47.391 ' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 00:05:47.391 ' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 00:05:47.391 ' 00:05:47.391 11:17:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:47.391 11:17:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:47.391 11:17:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.391 11:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.391 ************************************ 00:05:47.391 START TEST nvmf_target_core 00:05:47.391 ************************************ 00:05:47.391 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:47.654 * Looking for test storage... 
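The version-comparison trace that repeats before each test section (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`, used to pick lcov options) splits both version strings on `.`, `-`, and `:` with `IFS=.-:`, then compares the components numerically, padding the shorter array with zeros. A simplified sketch of that loop (the real `scripts/common.sh` also validates each component with its `decimal` helper; that check is omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions loop traced above: returns 0 (true) when the
# first version is strictly less than the second.
cmp_lt() {
    local -a ver1 ver2
    local v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    # Walk up to the longer of the two component lists.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0}    # missing components compare as 0 ("2" == "2.0")
        b=${ver2[v]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1               # equal is not strictly less-than
}

cmp_lt 1.15 2   && echo "1.15 < 2"       # the case from the log
cmp_lt 2.1 2.0  || echo "2.1 >= 2.0"
```

With `lcov --version` reporting at least 2.0, `lt 1.15 2` succeeds and the branch-coverage `--rc` options seen in the exported `LCOV_OPTS` above are selected.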
00:05:47.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.654 --rc genhtml_branch_coverage=1 00:05:47.654 --rc genhtml_function_coverage=1 00:05:47.654 --rc genhtml_legend=1 00:05:47.654 --rc geninfo_all_blocks=1 00:05:47.654 --rc geninfo_unexecuted_blocks=1 00:05:47.654 00:05:47.654 ' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.654 --rc genhtml_branch_coverage=1 
00:05:47.654 --rc genhtml_function_coverage=1 00:05:47.654 --rc genhtml_legend=1 00:05:47.654 --rc geninfo_all_blocks=1 00:05:47.654 --rc geninfo_unexecuted_blocks=1 00:05:47.654 00:05:47.654 ' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.654 --rc genhtml_branch_coverage=1 00:05:47.654 --rc genhtml_function_coverage=1 00:05:47.654 --rc genhtml_legend=1 00:05:47.654 --rc geninfo_all_blocks=1 00:05:47.654 --rc geninfo_unexecuted_blocks=1 00:05:47.654 00:05:47.654 ' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.654 --rc genhtml_branch_coverage=1 00:05:47.654 --rc genhtml_function_coverage=1 00:05:47.654 --rc genhtml_legend=1 00:05:47.654 --rc geninfo_all_blocks=1 00:05:47.654 --rc geninfo_unexecuted_blocks=1 00:05:47.654 00:05:47.654 ' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.654 11:17:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
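The `[: : integer expression expected` message from `nvmf/common.sh: line 33` above comes from the trace `'[' '' -eq 1 ']'`: testing an empty string numerically is a runtime error in the `[` builtin (it prints the diagnostic and exits with status 2, so the guarded branch is simply skipped and the run continues). A hedged sketch of the failure and one defensive fix; `SOME_FLAG` is a hypothetical stand-in, since the actual variable tested at line 33 is not visible in this log:

```shell
#!/usr/bin/env bash
# Reproduce the error class: numeric test against an empty variable.
SOME_FLAG=""

if [ "$SOME_FLAG" -eq 1 ] 2>/dev/null; then   # would print "[: : integer expression expected"
    first=set
else
    first=unset                               # this branch runs; status was 2, not 0
fi

# One fix: default the variable to 0 before the numeric comparison, so the
# test builtin always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    second=set
else
    second=unset                              # same outcome, no error printed
fi
echo "$first $second"                         # unset unset
```

Because `[` fails "safely" here, the error is cosmetic in this run, which is why the test suite proceeds past it each time `common.sh` is sourced.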
00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.655 ************************************ 00:05:47.655 START TEST nvmf_abort 00:05:47.655 ************************************ 00:05:47.655 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:47.916 * Looking for test storage... 
00:05:47.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.916 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.917 
11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.917 --rc genhtml_branch_coverage=1 00:05:47.917 --rc genhtml_function_coverage=1 00:05:47.917 --rc genhtml_legend=1 00:05:47.917 --rc geninfo_all_blocks=1 00:05:47.917 --rc 
geninfo_unexecuted_blocks=1 00:05:47.917 00:05:47.917 ' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.917 --rc genhtml_branch_coverage=1 00:05:47.917 --rc genhtml_function_coverage=1 00:05:47.917 --rc genhtml_legend=1 00:05:47.917 --rc geninfo_all_blocks=1 00:05:47.917 --rc geninfo_unexecuted_blocks=1 00:05:47.917 00:05:47.917 ' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.917 --rc genhtml_branch_coverage=1 00:05:47.917 --rc genhtml_function_coverage=1 00:05:47.917 --rc genhtml_legend=1 00:05:47.917 --rc geninfo_all_blocks=1 00:05:47.917 --rc geninfo_unexecuted_blocks=1 00:05:47.917 00:05:47.917 ' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.917 --rc genhtml_branch_coverage=1 00:05:47.917 --rc genhtml_function_coverage=1 00:05:47.917 --rc genhtml_legend=1 00:05:47.917 --rc geninfo_all_blocks=1 00:05:47.917 --rc geninfo_unexecuted_blocks=1 00:05:47.917 00:05:47.917 ' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.917 11:17:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.917 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:56.060 11:17:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.060 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:56.060 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:56.061 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:56.061 11:17:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:56.061 Found net devices under 0000:31:00.0: cvl_0_0 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:05:56.061 Found net devices under 0000:31:00.1: cvl_0_1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:56.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:56.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:05:56.061 00:05:56.061 --- 10.0.0.2 ping statistics --- 00:05:56.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.061 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:56.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:56.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:05:56.061 00:05:56.061 --- 10.0.0.1 ping statistics --- 00:05:56.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.061 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2273391 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2273391 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2273391 ']' 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.061 11:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.061 [2024-12-07 11:17:54.636181] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:56.061 [2024-12-07 11:17:54.636307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.061 [2024-12-07 11:17:54.807888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.061 [2024-12-07 11:17:54.933939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.061 [2024-12-07 11:17:54.934005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.061 [2024-12-07 11:17:54.934031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.061 [2024-12-07 11:17:54.934044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.061 [2024-12-07 11:17:54.934054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:56.061 [2024-12-07 11:17:54.936707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.061 [2024-12-07 11:17:54.936826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.061 [2024-12-07 11:17:54.936851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.061 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.061 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:56.062 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:56.062 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.062 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 [2024-12-07 11:17:55.456065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 Malloc0 00:05:56.322 11:17:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 Delay0 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 [2024-12-07 11:17:55.578239] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.322 11:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:56.582 [2024-12-07 11:17:55.749575] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:58.495 Initializing NVMe Controllers 00:05:58.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:58.495 controller IO queue size 128 less than required 00:05:58.495 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:58.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:58.495 Initialization complete. Launching workers. 
00:05:58.495 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27435 00:05:58.495 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27496, failed to submit 66 00:05:58.495 success 27435, unsuccessful 61, failed 0 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:58.495 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:58.495 rmmod nvme_tcp 00:05:58.758 rmmod nvme_fabrics 00:05:58.758 rmmod nvme_keyring 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:58.758 11:17:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2273391 ']' 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2273391 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2273391 ']' 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2273391 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273391 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273391' 00:05:58.758 killing process with pid 2273391 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2273391 00:05:58.758 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2273391 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 
-- # iptables-save 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.701 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:01.618 00:06:01.618 real 0m13.971s 00:06:01.618 user 0m15.271s 00:06:01.618 sys 0m6.507s 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.618 ************************************ 00:06:01.618 END TEST nvmf_abort 00:06:01.618 ************************************ 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.618 11:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:01.880 ************************************ 00:06:01.880 START TEST nvmf_ns_hotplug_stress 00:06:01.880 ************************************ 00:06:01.880 11:18:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.880 * Looking for test storage... 00:06:01.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.880 
11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.880 11:18:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.880 --rc genhtml_branch_coverage=1 00:06:01.880 --rc genhtml_function_coverage=1 00:06:01.880 --rc genhtml_legend=1 00:06:01.880 --rc geninfo_all_blocks=1 00:06:01.880 --rc geninfo_unexecuted_blocks=1 00:06:01.880 00:06:01.880 ' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.880 --rc genhtml_branch_coverage=1 00:06:01.880 --rc genhtml_function_coverage=1 00:06:01.880 --rc genhtml_legend=1 00:06:01.880 --rc geninfo_all_blocks=1 00:06:01.880 --rc geninfo_unexecuted_blocks=1 00:06:01.880 00:06:01.880 ' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.880 --rc genhtml_branch_coverage=1 00:06:01.880 --rc genhtml_function_coverage=1 00:06:01.880 --rc genhtml_legend=1 00:06:01.880 --rc geninfo_all_blocks=1 00:06:01.880 --rc geninfo_unexecuted_blocks=1 00:06:01.880 00:06:01.880 ' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.880 --rc genhtml_branch_coverage=1 00:06:01.880 --rc genhtml_function_coverage=1 00:06:01.880 --rc genhtml_legend=1 00:06:01.880 --rc geninfo_all_blocks=1 00:06:01.880 --rc geninfo_unexecuted_blocks=1 00:06:01.880 
00:06:01.880 ' 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.880 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.881 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:02.142 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:10.302 11:18:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:10.302 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:10.302 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:10.302 11:18:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:10.302 Found net devices under 0000:31:00.0: cvl_0_0 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.302 11:18:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:10.302 Found net devices under 0000:31:00.1: cvl_0_1 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:10.302 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.303 11:18:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:10.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:06:10.303 00:06:10.303 --- 10.0.0.2 ping statistics --- 00:06:10.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.303 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:10.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:06:10.303 00:06:10.303 --- 10.0.0.1 ping statistics --- 00:06:10.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.303 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2278797 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2278797 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2278797 ']' 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:10.303 11:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:10.303 [2024-12-07 11:18:08.714493] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:10.303 [2024-12-07 11:18:08.714621] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:10.303 [2024-12-07 11:18:08.882853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:10.303 [2024-12-07 11:18:09.013426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:10.303 [2024-12-07 11:18:09.013494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:10.303 [2024-12-07 11:18:09.013507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:10.303 [2024-12-07 11:18:09.013520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:10.303 [2024-12-07 11:18:09.013529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:10.303 [2024-12-07 11:18:09.016212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.303 [2024-12-07 11:18:09.016521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:10.303 [2024-12-07 11:18:09.016538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:10.303 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:10.563 [2024-12-07 11:18:09.656873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:10.563 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:10.563 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:10.824 [2024-12-07 11:18:10.019884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:10.824 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:11.085 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:11.347 Malloc0
00:06:11.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:11.347 Delay0
00:06:11.347 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.608 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:11.868 NULL1
00:06:11.868 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:11.868 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2279735
00:06:11.868 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:11.868 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735
00:06:11.868 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.128 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.388 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:12.388 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:12.388 true
00:06:12.388 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735
00:06:12.388 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.649 11:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.911 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:12.911 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:12.911 true
00:06:13.172 11:18:12
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:13.172 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.172 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.433 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:13.433 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:13.693 true 00:06:13.693 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:13.693 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.693 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.954 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:13.954 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:14.215 true 00:06:14.215 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:14.215 11:18:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.477 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.477 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:14.477 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:14.739 true 00:06:14.739 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:14.739 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.000 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.000 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:15.000 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:15.260 true 00:06:15.260 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:15.260 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.521 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.521 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:15.521 11:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:15.782 true 00:06:15.782 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:15.782 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.043 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.301 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:16.301 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:16.301 true 00:06:16.302 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:16.302 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.561 
11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.822 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:16.822 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:16.822 true 00:06:16.822 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:16.822 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.081 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.342 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:17.342 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:17.342 true 00:06:17.602 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:17.602 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.602 11:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.864 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:17.864 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:18.125 true 00:06:18.125 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:18.125 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.125 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.387 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:18.387 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:18.679 true 00:06:18.679 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:18.679 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.679 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.982 
11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:18.982 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:19.266 true 00:06:19.266 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:19.266 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.266 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.527 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:19.527 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:19.789 true 00:06:19.789 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:19.789 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.789 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.050 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:20.050 11:18:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:20.310 true 00:06:20.310 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:20.310 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.310 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.570 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:20.570 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:20.830 true 00:06:20.830 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:20.830 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.090 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.090 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:21.090 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:21.348 true 00:06:21.348 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:21.348 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.607 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.607 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:21.607 11:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:21.872 true 00:06:21.872 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:21.872 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.131 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.390 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:22.390 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:22.390 true 00:06:22.390 11:18:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:22.390 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.649 11:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.908 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:22.908 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:22.908 true 00:06:22.908 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:22.908 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.167 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.426 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:23.426 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:23.426 true 00:06:23.684 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:23.684 11:18:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.684 11:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.944 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:23.944 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:24.204 true 00:06:24.204 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:24.204 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.204 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.463 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:24.463 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:24.722 true 00:06:24.722 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:24.722 11:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.983 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.983 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:24.983 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:25.243 true 00:06:25.243 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:25.243 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.502 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.502 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:25.502 11:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:25.763 true 00:06:25.763 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:25.763 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.022 
11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.022 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:26.022 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:26.281 true 00:06:26.281 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:26.281 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.541 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.801 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:26.801 11:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:26.801 true 00:06:26.801 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:26.801 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.061 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.322 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:27.322 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:27.322 true 00:06:27.582 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:27.582 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.582 11:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.843 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:27.843 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:28.103 true 00:06:28.103 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:28.103 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.103 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.363 
11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:28.363 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:28.623 true 00:06:28.623 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:28.623 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.623 11:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.884 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:28.884 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:29.144 true 00:06:29.144 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:29.145 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.405 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.405 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:29.405 11:18:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:29.666 true 00:06:29.666 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:29.666 11:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.928 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.928 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:29.928 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:30.189 true 00:06:30.189 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:30.190 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.451 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.712 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:30.712 11:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:30.712 true 00:06:30.712 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:30.712 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.973 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.234 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:31.234 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:31.234 true 00:06:31.234 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:31.234 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.495 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.755 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:31.755 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:32.017 true 00:06:32.017 11:18:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:32.017 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.017 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.279 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:32.279 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:32.540 true 00:06:32.541 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:32.541 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.541 11:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.827 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:32.827 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:33.087 true 00:06:33.087 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:33.087 11:18:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.087 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.348 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:33.348 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:33.610 true 00:06:33.610 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:33.610 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.870 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.870 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:33.870 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:34.131 true 00:06:34.131 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:34.131 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.393 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.393 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:34.393 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:34.653 true 00:06:34.653 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:34.653 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.914 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.914 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:34.914 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:35.175 true 00:06:35.176 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:35.176 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.437 
11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.437 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:35.438 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:35.698 true 00:06:35.698 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:35.698 11:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.958 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.217 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:36.217 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:36.217 true 00:06:36.217 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:36.217 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.475 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.736 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:36.736 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:36.736 true 00:06:36.736 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:36.736 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.994 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.254 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:37.254 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:37.254 true 00:06:37.513 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:37.514 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.514 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.774 
11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:37.774 11:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:38.047 true 00:06:38.047 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:38.047 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.047 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.307 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:38.307 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:38.566 true 00:06:38.566 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:38.566 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.566 11:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.825 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:38.825 11:18:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:39.085 true 00:06:39.085 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:39.085 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.345 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.345 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:39.345 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:39.604 true 00:06:39.604 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:39.604 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.865 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.124 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:40.124 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:40.124 true 00:06:40.124 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:40.124 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.385 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.645 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:40.645 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:40.645 true 00:06:40.645 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:40.645 11:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.906 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.165 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:41.165 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:41.165 true 00:06:41.425 11:18:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:41.425 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.425 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.685 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:41.685 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:41.945 true 00:06:41.945 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:41.945 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.945 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.204 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:42.204 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:42.204 Initializing NVMe Controllers 00:06:42.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.204 Controller IO queue 
size 128, less than required. 00:06:42.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:42.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:42.204 Initialization complete. Launching workers. 00:06:42.204 ======================================================== 00:06:42.204 Latency(us) 00:06:42.204 Device Information : IOPS MiB/s Average min max 00:06:42.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27560.33 13.46 4644.22 1556.90 10171.12 00:06:42.204 ======================================================== 00:06:42.204 Total : 27560.33 13.46 4644.22 1556.90 10171.12 00:06:42.204 00:06:42.464 true 00:06:42.464 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2279735 00:06:42.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2279735) - No such process 00:06:42.464 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2279735 00:06:42.464 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.464 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.723 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:42.723 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:42.723 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:42.723 11:18:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.723 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:42.983 null0 00:06:42.983 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.983 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.983 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:42.983 null1 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:43.242 null2 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.242 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:43.501 null3 00:06:43.501 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.501 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:06:43.501 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:43.761 null4 00:06:43.761 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.761 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.761 11:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:43.761 null5 00:06:43.761 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.761 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.761 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:44.022 null6 00:06:44.022 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.022 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.022 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:44.283 null7 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.283 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2286284 2286286 2286289 2286292 2286295 2286298 2286301 2286304 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.284 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.545 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.805 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.805 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.806 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.806 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.806 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.806 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.067 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.328 11:18:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.328 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.329 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.590 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.859 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.859 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.859 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.860 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.860 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.860 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.860 11:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.860 11:18:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.860 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.122 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.383 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:46.645 11:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:46.907 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:47.169 11:18:46
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:47.169 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.170 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.170 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.170 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.432 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.695 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.956 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:47.956 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:47.956 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.957 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.957 11:18:47
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2278797 ']'
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2278797
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2278797 ']'
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2278797
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278797
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278797'
killing process with pid 2278797
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2278797
00:06:48.219 11:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2278797
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:48.808 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:51.353 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:51.353
00:06:51.353 real 0m49.160s
00:06:51.353 user 3m19.460s
00:06:51.353 sys 0m16.836s
00:06:51.353 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.353 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:51.353 ************************************
00:06:51.353 END TEST nvmf_ns_hotplug_stress
************************************
00:06:51.353 11:18:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:51.354 ************************************
00:06:51.354 START TEST nvmf_delete_subsystem
00:06:51.354 ************************************
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:51.354 * Looking for test storage...
00:06:51.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.354 --rc genhtml_branch_coverage=1
00:06:51.354 --rc genhtml_function_coverage=1
00:06:51.354 --rc genhtml_legend=1
00:06:51.354 --rc geninfo_all_blocks=1
00:06:51.354 --rc geninfo_unexecuted_blocks=1
00:06:51.354
00:06:51.354 '
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.354 --rc genhtml_branch_coverage=1
00:06:51.354 --rc genhtml_function_coverage=1
00:06:51.354 --rc genhtml_legend=1
00:06:51.354 --rc geninfo_all_blocks=1
00:06:51.354 --rc geninfo_unexecuted_blocks=1
00:06:51.354
00:06:51.354 '
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.354 --rc genhtml_branch_coverage=1
00:06:51.354 --rc genhtml_function_coverage=1
00:06:51.354 --rc genhtml_legend=1
00:06:51.354 --rc geninfo_all_blocks=1
00:06:51.354 --rc geninfo_unexecuted_blocks=1
00:06:51.354
00:06:51.354 '
00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.354 --rc genhtml_branch_coverage=1
00:06:51.354 --rc genhtml_function_coverage=1
00:06:51.354 --rc genhtml_legend=1
00:06:51.354 --rc geninfo_all_blocks=1
00:06:51.354 --rc geninfo_unexecuted_blocks=1 00:06:51.354 00:06:51.354 ' 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.354 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:51.355 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:59.500 11:18:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:59.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:59.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:59.500 Found net devices under 0000:31:00.0: cvl_0_0 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:06:59.500 Found net devices under 0000:31:00.1: cvl_0_1 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.500 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:59.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:59.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:06:59.501 00:06:59.501 --- 10.0.0.2 ping statistics --- 00:06:59.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.501 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:59.501 00:06:59.501 --- 10.0.0.1 ping statistics --- 00:06:59.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.501 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:59.501 11:18:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2291577 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2291577 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2291577 ']' 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.501 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 [2024-12-07 11:18:57.904351] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:59.501 [2024-12-07 11:18:57.904456] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.501 [2024-12-07 11:18:58.041609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.501 [2024-12-07 11:18:58.141159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.501 [2024-12-07 11:18:58.141201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.501 [2024-12-07 11:18:58.141213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.501 [2024-12-07 11:18:58.141231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.501 [2024-12-07 11:18:58.141241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:59.501 [2024-12-07 11:18:58.143092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.501 [2024-12-07 11:18:58.143246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 [2024-12-07 11:18:58.703648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 [2024-12-07 11:18:58.728374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 NULL1 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 Delay0 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.501 11:18:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.501 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.502 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2291748 00:06:59.502 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:59.502 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.771 [2024-12-07 11:18:58.866003] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:01.685 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.685 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.685 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.685 Read completed with error (sct=0, sc=8) 00:07:01.685 Write completed with error (sct=0, sc=8) 00:07:01.685 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error 
(sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 starting I/O failed: -6 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read 
completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, 
sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 [2024-12-07 11:19:01.000917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 
00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 starting I/O failed: -6 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 [2024-12-07 11:19:01.005063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 
00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Write completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.686 Read completed with error (sct=0, sc=8) 00:07:01.687 Read completed with error (sct=0, sc=8) 00:07:01.687 Read completed with error (sct=0, sc=8) 00:07:01.687 Write completed with error (sct=0, sc=8) 00:07:01.687 Write completed with error (sct=0, sc=8) 00:07:01.687 Read completed with error (sct=0, sc=8) 00:07:02.746 [2024-12-07 11:19:01.967766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(6) to be set 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 
Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 [2024-12-07 11:19:02.003547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(6) to be set 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read 
completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 [2024-12-07 11:19:02.004115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026500 is same with the state(6) to be set 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 [2024-12-07 11:19:02.005137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030780 is same with the state(6) to be set 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 Read completed with error (sct=0, sc=8) 00:07:02.746 Write completed with error (sct=0, sc=8) 00:07:02.746 [2024-12-07 11:19:02.007392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x615000030280 is same with the state(6) to be set 00:07:02.746 Initializing NVMe Controllers 00:07:02.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.746 Controller IO queue size 128, less than required. 00:07:02.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.747 Initialization complete. Launching workers. 00:07:02.747 ======================================================== 00:07:02.747 Latency(us) 00:07:02.747 Device Information : IOPS MiB/s Average min max 00:07:02.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.81 0.08 893308.22 361.11 1006770.74 00:07:02.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.41 0.07 954442.02 394.09 1010840.42 00:07:02.747 ======================================================== 00:07:02.747 Total : 318.22 0.16 921435.51 361.11 1010840.42 00:07:02.747 00:07:02.747 [2024-12-07 11:19:02.008451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025b00 (9): Bad file descriptor 00:07:02.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:02.747 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.747 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:02.747 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2291748 00:07:02.747 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:03.316 11:19:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:03.316 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2291748 00:07:03.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2291748) - No such process 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2291748 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2291748 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2291748 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 [2024-12-07 11:19:02.537771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2292610 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:03.317 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.317 [2024-12-07 11:19:02.657138] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:03.886 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.886 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:03.886 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.456 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.456 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:04.456 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.028 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.028 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:05.028 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.290 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:07:05.290 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:05.290 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.863 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.863 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:05.863 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.434 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.434 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:06.434 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.696 Initializing NVMe Controllers 00:07:06.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.696 Controller IO queue size 128, less than required. 00:07:06.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:06.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:06.696 Initialization complete. Launching workers. 
00:07:06.696 ======================================================== 00:07:06.696 Latency(us) 00:07:06.696 Device Information : IOPS MiB/s Average min max 00:07:06.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002128.57 1000134.89 1041051.56 00:07:06.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003425.71 1000388.02 1010062.88 00:07:06.696 ======================================================== 00:07:06.696 Total : 256.00 0.12 1002777.14 1000134.89 1041051.56 00:07:06.696 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2292610 00:07:06.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2292610) - No such process 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2292610 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:06.956 rmmod nvme_tcp 00:07:06.956 rmmod nvme_fabrics 00:07:06.956 rmmod nvme_keyring 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:06.956 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2291577 ']' 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2291577 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2291577 ']' 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2291577 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291577 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291577' 00:07:06.957 killing process with pid 2291577 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2291577 00:07:06.957 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2291577 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.901 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.815 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:09.815 00:07:09.815 real 0m18.846s 00:07:09.815 user 0m31.658s 00:07:09.815 sys 0m6.688s 00:07:09.815 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.815 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.815 ************************************ 00:07:09.815 END TEST 
nvmf_delete_subsystem 00:07:09.816 ************************************ 00:07:09.816 11:19:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:09.816 11:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.816 11:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.816 11:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.079 ************************************ 00:07:10.079 START TEST nvmf_host_management 00:07:10.079 ************************************ 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:10.079 * Looking for test storage... 00:07:10.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.079 11:19:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.079 --rc genhtml_branch_coverage=1 00:07:10.079 --rc genhtml_function_coverage=1 00:07:10.079 --rc genhtml_legend=1 00:07:10.079 --rc 
geninfo_all_blocks=1 00:07:10.079 --rc geninfo_unexecuted_blocks=1 00:07:10.079 00:07:10.079 ' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.079 --rc genhtml_branch_coverage=1 00:07:10.079 --rc genhtml_function_coverage=1 00:07:10.079 --rc genhtml_legend=1 00:07:10.079 --rc geninfo_all_blocks=1 00:07:10.079 --rc geninfo_unexecuted_blocks=1 00:07:10.079 00:07:10.079 ' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.079 --rc genhtml_branch_coverage=1 00:07:10.079 --rc genhtml_function_coverage=1 00:07:10.079 --rc genhtml_legend=1 00:07:10.079 --rc geninfo_all_blocks=1 00:07:10.079 --rc geninfo_unexecuted_blocks=1 00:07:10.079 00:07:10.079 ' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.079 --rc genhtml_branch_coverage=1 00:07:10.079 --rc genhtml_function_coverage=1 00:07:10.079 --rc genhtml_legend=1 00:07:10.079 --rc geninfo_all_blocks=1 00:07:10.079 --rc geninfo_unexecuted_blocks=1 00:07:10.079 00:07:10.079 ' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.079 
11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.079 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.080 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.342 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.342 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.342 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:18.487 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:18.487 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.487 11:19:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.487 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:18.488 Found net devices under 0000:31:00.0: cvl_0_0 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:18.488 Found net devices under 0000:31:00.1: cvl_0_1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:07:18.488 00:07:18.488 --- 10.0.0.2 ping statistics --- 00:07:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.488 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:18.488 00:07:18.488 --- 10.0.0.1 ping statistics --- 00:07:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.488 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.488 11:19:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2297705 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2297705 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2297705 ']' 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.488 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.489 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.489 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.489 [2024-12-07 11:19:16.953898] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:18.489 [2024-12-07 11:19:16.954000] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.489 [2024-12-07 11:19:17.105865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:18.489 [2024-12-07 11:19:17.226919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:18.489 [2024-12-07 11:19:17.226988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:18.489 [2024-12-07 11:19:17.227002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:18.489 [2024-12-07 11:19:17.227028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:18.489 [2024-12-07 11:19:17.227040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:18.489 [2024-12-07 11:19:17.229881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.489 [2024-12-07 11:19:17.230049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:18.489 [2024-12-07 11:19:17.230167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:18.489 [2024-12-07 11:19:17.230363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:18.489 [2024-12-07 11:19:17.761918] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:07:18.489 11:19:17
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.489 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.749 Malloc0 00:07:18.749 [2024-12-07 11:19:17.874543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2297964 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2297964 /var/tmp/bdevperf.sock 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2297964 ']' 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:18.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.749 { 00:07:18.749 "params": { 00:07:18.749 "name": "Nvme$subsystem", 00:07:18.749 "trtype": "$TEST_TRANSPORT", 00:07:18.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.749 "adrfam": "ipv4", 00:07:18.749 "trsvcid": "$NVMF_PORT", 00:07:18.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.749 "hdgst": ${hdgst:-false}, 
00:07:18.749 "ddgst": ${ddgst:-false} 00:07:18.749 }, 00:07:18.749 "method": "bdev_nvme_attach_controller" 00:07:18.749 } 00:07:18.749 EOF 00:07:18.749 )") 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:18.749 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.749 "params": { 00:07:18.749 "name": "Nvme0", 00:07:18.749 "trtype": "tcp", 00:07:18.749 "traddr": "10.0.0.2", 00:07:18.749 "adrfam": "ipv4", 00:07:18.749 "trsvcid": "4420", 00:07:18.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:18.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:18.749 "hdgst": false, 00:07:18.749 "ddgst": false 00:07:18.749 }, 00:07:18.749 "method": "bdev_nvme_attach_controller" 00:07:18.749 }' 00:07:18.749 [2024-12-07 11:19:18.016588] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:18.749 [2024-12-07 11:19:18.016695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297964 ] 00:07:19.009 [2024-12-07 11:19:18.143280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.009 [2024-12-07 11:19:18.241291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.578 Running I/O for 10 seconds... 
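The `gen_nvmf_target_json` output above is a shell heredoc template whose `$subsystem`, `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` expansions produce the `bdev_nvme_attach_controller` config that bdevperf reads via `--json /dev/fd/63`. A self-contained sketch of that pattern, with the variable values copied from the printed config in this log (the real helper also wraps this in a `subsystems` array via `jq`, which is elided here):

```shell
# Sketch: fill the heredoc template seen in the trace with this run's values.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Because `hdgst`/`ddgst` are unset, the `${hdgst:-false}` defaults yield `"hdgst": false` and `"ddgst": false`, matching the final JSON printed by `printf '%s\n'` above.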
00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=72 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 72 -ge 100 ']' 00:07:19.579 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=576 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 576 -ge 100 ']' 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.842 [2024-12-07 11:19:19.162789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.162892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to 
be set 00:07:19.842 [2024-12-07 11:19:19.162901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.842 [2024-12-07 11:19:19.168404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.842 [2024-12-07 11:19:19.168451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.842 [2024-12-07 11:19:19.168488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.842 [2024-12-07 11:19:19.168511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.842 [2024-12-07 11:19:19.168532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
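The `waitforio` sequence a few lines above (host_management.sh@52-@64) is a bounded poll: up to ten times, read `num_read_ops` for `Nvme0n1` and stop once it crosses 100, sleeping 0.25 s between polls. A self-contained sketch of that loop; the real script gets the count from `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`, which is replaced here by hard-coded values (72, then 576) copied from this log so the sketch runs anywhere:

```shell
# Sketch of the waitforio polling loop driving the trace above.
ret=1
i=10
attempt=0
while [ "$i" != 0 ]; do
  attempt=$((attempt + 1))
  # Stand-in for the rpc_cmd|jq pipeline; values mirror this run's polls.
  if [ "$attempt" -eq 1 ]; then read_io_count=72; else read_io_count=576; fi
  if [ "$read_io_count" -ge 100 ]; then
    ret=0      # enough I/O observed; host_management.sh@59-@60 breaks here
    break
  fi
  sleep 0.25   # host_management.sh@62
  i=$((i - 1))
done
echo "ret=$ret after $attempt polls"
```

As in the log, the first poll (72 ops) fails the `-ge 100` test and the second (576 ops) passes, so the loop returns success after two iterations.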
00:07:19.842 [2024-12-07 11:19:19.168545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:07:19.842 [2024-12-07 11:19:19.168599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.842 [2024-12-07 11:19:19.168770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.168988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.168999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169018] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169149] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.842 [2024-12-07 11:19:19.169348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.842 [2024-12-07 11:19:19.169368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.843 [2024-12-07 11:19:19.169381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.843 [2024-12-07 11:19:19.169394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.843 [2024-12-07 11:19:19.169407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.843 [2024-12-07 11:19:19.169418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.843 
[2024-12-07 11:19:19.169431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:19.843 [2024-12-07 11:19:19.169441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:19.843 [... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:34 through cid:63 (lba:86272 through lba:89984, in steps of 128) ...]
00:07:19.843 [2024-12-07 11:19:19.171611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:19.843 task offset: 82560 on job bdev=Nvme0n1 fails
00:07:19.843 
00:07:19.843 Latency(us)
00:07:19.843 [2024-12-07T10:19:19.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:19.843 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:19.843 Job: Nvme0n1 ended in about 0.44 seconds with error
00:07:19.843 Verification LBA range: start 0x0 length 0x400
00:07:19.843 Nvme0n1 : 0.44 1469.45 91.84 146.95 0.00 38396.62 2594.13 37137.07
00:07:19.843 [2024-12-07T10:19:19.197Z] ===================================================================================================================
00:07:19.843 [2024-12-07T10:19:19.197Z] Total : 1469.45 91.84 146.95 0.00 38396.62 2594.13 37137.07
00:07:19.843 [2024-12-07 11:19:19.175905] app.c:1064:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero
00:07:19.843 [2024-12-07 11:19:19.175941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:07:19.843 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.843 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-12-07 11:19:19.183621] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2297964
00:07:21.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2297964) - No such process
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF
00:07:21.228 {
00:07:21.228 "params": {
00:07:21.228 "name": "Nvme$subsystem",
00:07:21.228 "trtype": "$TEST_TRANSPORT",
00:07:21.228 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:21.228 "adrfam": "ipv4",
00:07:21.228 "trsvcid": "$NVMF_PORT",
00:07:21.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:21.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:21.228 "hdgst": ${hdgst:-false},
00:07:21.228 "ddgst": ${ddgst:-false}
00:07:21.228 },
00:07:21.228 "method": "bdev_nvme_attach_controller"
00:07:21.228 }
00:07:21.228 EOF
00:07:21.228 )")
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:07:21.228 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:21.228 "params": {
00:07:21.228 "name": "Nvme0",
00:07:21.228 "trtype": "tcp",
00:07:21.228 "traddr": "10.0.0.2",
00:07:21.228 "adrfam": "ipv4",
00:07:21.228 "trsvcid": "4420",
00:07:21.228 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:21.228 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:21.228 "hdgst": false,
00:07:21.228 "ddgst": false
00:07:21.228 },
00:07:21.228 "method": "bdev_nvme_attach_controller"
00:07:21.228 }'
00:07:21.228 [2024-12-07 11:19:20.268990] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:07:21.228 [2024-12-07 11:19:20.269102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298439 ]
00:07:21.228 [2024-12-07 11:19:20.394919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.228 [2024-12-07 11:19:20.493280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.813 Running I/O for 1 seconds...
00:07:22.754 1472.00 IOPS, 92.00 MiB/s
00:07:22.754 Latency(us)
00:07:22.754 [2024-12-07T10:19:22.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:22.754 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:22.754 Verification LBA range: start 0x0 length 0x400
00:07:22.754 Nvme0n1 : 1.01 1519.56 94.97 0.00 0.00 41366.17 8301.23 35389.44
00:07:22.754 [2024-12-07T10:19:22.108Z] ===================================================================================================================
00:07:22.754 [2024-12-07T10:19:22.108Z] Total : 1519.56 94.97 0.00 0.00 41366.17 8301.23 35389.44
00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
11:19:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.329 rmmod nvme_tcp 00:07:23.329 rmmod nvme_fabrics 00:07:23.329 rmmod nvme_keyring 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2297705 ']' 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2297705 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2297705 ']' 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2297705 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.329 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297705 00:07:23.591 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:23.591 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:23.591 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297705' 00:07:23.591 killing process with pid 2297705 00:07:23.591 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2297705 00:07:23.591 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2297705 00:07:24.163 [2024-12-07 11:19:23.334248] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.163 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:26.708 00:07:26.708 real 0m16.299s 00:07:26.708 user 0m30.119s 00:07:26.708 sys 0m6.956s 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.708 ************************************ 00:07:26.708 END TEST nvmf_host_management 00:07:26.708 ************************************ 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.708 ************************************ 00:07:26.708 START TEST nvmf_lvol 00:07:26.708 ************************************ 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.708 * Looking for test storage... 
00:07:26.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:26.708 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.709 11:19:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.709 --rc genhtml_branch_coverage=1 00:07:26.709 --rc genhtml_function_coverage=1 00:07:26.709 --rc genhtml_legend=1 00:07:26.709 --rc geninfo_all_blocks=1 00:07:26.709 --rc geninfo_unexecuted_blocks=1 
00:07:26.709 00:07:26.709 ' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.709 --rc genhtml_branch_coverage=1 00:07:26.709 --rc genhtml_function_coverage=1 00:07:26.709 --rc genhtml_legend=1 00:07:26.709 --rc geninfo_all_blocks=1 00:07:26.709 --rc geninfo_unexecuted_blocks=1 00:07:26.709 00:07:26.709 ' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.709 --rc genhtml_branch_coverage=1 00:07:26.709 --rc genhtml_function_coverage=1 00:07:26.709 --rc genhtml_legend=1 00:07:26.709 --rc geninfo_all_blocks=1 00:07:26.709 --rc geninfo_unexecuted_blocks=1 00:07:26.709 00:07:26.709 ' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.709 --rc genhtml_branch_coverage=1 00:07:26.709 --rc genhtml_function_coverage=1 00:07:26.709 --rc genhtml_legend=1 00:07:26.709 --rc geninfo_all_blocks=1 00:07:26.709 --rc geninfo_unexecuted_blocks=1 00:07:26.709 00:07:26.709 ' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.709 11:19:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.709 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:34.856 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:34.856 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.856 
11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:34.856 Found net devices under 0000:31:00.0: cvl_0_0 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.856 11:19:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:34.856 Found net devices under 0000:31:00.1: cvl_0_1 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.856 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.857 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:07:34.857 00:07:34.857 --- 10.0.0.2 ping statistics --- 00:07:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.857 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:34.857 00:07:34.857 --- 10.0.0.1 ping statistics --- 00:07:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.857 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2303260 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2303260 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2303260 ']' 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.857 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.857 [2024-12-07 11:19:33.387142] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:34.857 [2024-12-07 11:19:33.387266] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.857 [2024-12-07 11:19:33.539008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.857 [2024-12-07 11:19:33.640200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.857 [2024-12-07 11:19:33.640245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.857 [2024-12-07 11:19:33.640258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.857 [2024-12-07 11:19:33.640269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.857 [2024-12-07 11:19:33.640279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.857 [2024-12-07 11:19:33.642364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.857 [2024-12-07 11:19:33.642517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.857 [2024-12-07 11:19:33.642518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.857 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.117 [2024-12-07 11:19:34.353549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.117 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:35.377 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:35.377 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:35.637 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:35.637 11:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:35.896 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:35.896 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2af12e18-0268-4e50-b645-0ecffc2cf3a9 00:07:35.896 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2af12e18-0268-4e50-b645-0ecffc2cf3a9 lvol 20 00:07:36.155 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ec449d0b-51b1-4fd1-8d2d-fc8a772d17de 00:07:36.155 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.415 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec449d0b-51b1-4fd1-8d2d-fc8a772d17de 00:07:36.675 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.675 [2024-12-07 11:19:35.925368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.675 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.935 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2303910 00:07:36.935 11:19:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:36.935 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:37.877 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ec449d0b-51b1-4fd1-8d2d-fc8a772d17de MY_SNAPSHOT 00:07:38.137 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=97fe7775-ec23-48bf-a840-f98675061a12 00:07:38.137 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ec449d0b-51b1-4fd1-8d2d-fc8a772d17de 30 00:07:38.397 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 97fe7775-ec23-48bf-a840-f98675061a12 MY_CLONE 00:07:38.656 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=00ebc0b7-f591-46e4-aa1d-65022f1b65a9 00:07:38.656 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 00ebc0b7-f591-46e4-aa1d-65022f1b65a9 00:07:39.226 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2303910 00:07:47.477 Initializing NVMe Controllers 00:07:47.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:47.477 Controller IO queue size 128, less than required. 00:07:47.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:47.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:47.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:47.477 Initialization complete. Launching workers. 00:07:47.477 ======================================================== 00:07:47.477 Latency(us) 00:07:47.477 Device Information : IOPS MiB/s Average min max 00:07:47.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16042.70 62.67 7980.66 606.29 114201.31 00:07:47.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11439.40 44.69 11195.69 4544.45 90822.76 00:07:47.477 ======================================================== 00:07:47.477 Total : 27482.10 107.35 9318.91 606.29 114201.31 00:07:47.477 00:07:47.477 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.477 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec449d0b-51b1-4fd1-8d2d-fc8a772d17de 00:07:47.737 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2af12e18-0268-4e50-b645-0ecffc2cf3a9 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.737 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.737 rmmod nvme_tcp 00:07:47.737 rmmod nvme_fabrics 00:07:47.996 rmmod nvme_keyring 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2303260 ']' 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2303260 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2303260 ']' 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2303260 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303260 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.996 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.997 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303260' 00:07:47.997 killing process with pid 2303260 00:07:47.997 11:19:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2303260 00:07:47.997 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2303260 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.936 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.487 00:07:51.487 real 0m24.702s 00:07:51.487 user 1m6.210s 00:07:51.487 sys 0m8.711s 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.487 ************************************ 00:07:51.487 END TEST 
nvmf_lvol 00:07:51.487 ************************************ 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.487 ************************************ 00:07:51.487 START TEST nvmf_lvs_grow 00:07:51.487 ************************************ 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.487 * Looking for test storage... 00:07:51.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.487 11:19:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:51.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.487 --rc genhtml_branch_coverage=1 00:07:51.487 --rc genhtml_function_coverage=1 00:07:51.487 --rc genhtml_legend=1 00:07:51.487 --rc geninfo_all_blocks=1 00:07:51.487 --rc geninfo_unexecuted_blocks=1 00:07:51.487 00:07:51.487 ' 
00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:51.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.487 --rc genhtml_branch_coverage=1 00:07:51.487 --rc genhtml_function_coverage=1 00:07:51.487 --rc genhtml_legend=1 00:07:51.487 --rc geninfo_all_blocks=1 00:07:51.487 --rc geninfo_unexecuted_blocks=1 00:07:51.487 00:07:51.487 ' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:51.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.487 --rc genhtml_branch_coverage=1 00:07:51.487 --rc genhtml_function_coverage=1 00:07:51.487 --rc genhtml_legend=1 00:07:51.487 --rc geninfo_all_blocks=1 00:07:51.487 --rc geninfo_unexecuted_blocks=1 00:07:51.487 00:07:51.487 ' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:51.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.487 --rc genhtml_branch_coverage=1 00:07:51.487 --rc genhtml_function_coverage=1 00:07:51.487 --rc genhtml_legend=1 00:07:51.487 --rc geninfo_all_blocks=1 00:07:51.487 --rc geninfo_unexecuted_blocks=1 00:07:51.487 00:07:51.487 ' 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.487 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.487 11:19:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.488 
11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.488 11:19:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.488 
11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.488 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.624 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:59.625 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:59.625 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.625 
11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:59.625 Found net devices under 0000:31:00.0: cvl_0_0 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:59.625 Found net devices under 0000:31:00.1: cvl_0_1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.625 11:19:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:07:59.625 00:07:59.625 --- 10.0.0.2 ping statistics --- 00:07:59.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.625 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:07:59.625 00:07:59.625 --- 10.0.0.1 ping statistics --- 00:07:59.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.625 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:07:59.625 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.626 11:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2310629 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2310629 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2310629 ']' 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 [2024-12-07 11:19:58.161767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:59.626 [2024-12-07 11:19:58.161896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.626 [2024-12-07 11:19:58.297766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.626 [2024-12-07 11:19:58.393552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.626 [2024-12-07 11:19:58.393597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.626 [2024-12-07 11:19:58.393609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.626 [2024-12-07 11:19:58.393622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.626 [2024-12-07 11:19:58.393632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:59.626 [2024-12-07 11:19:58.394871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.626 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.887 [2024-12-07 11:19:59.111485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.887 ************************************ 00:07:59.887 START TEST lvs_grow_clean 00:07:59.887 ************************************ 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.887 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.148 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:00.148 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:00.408 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:00.409 11:19:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:00.409 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:00.409 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:00.409 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:00.409 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b lvol 150 00:08:00.669 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=109c1662-24e7-4f7e-b898-9f6a77b38179 00:08:00.669 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:00.669 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:00.929 [2024-12-07 11:20:00.080501] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:00.929 [2024-12-07 11:20:00.080580] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:00.929 true 00:08:00.929 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:00.929 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:00.929 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:00.929 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.188 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 109c1662-24e7-4f7e-b898-9f6a77b38179 00:08:01.449 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.449 [2024-12-07 11:20:00.754758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.449 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2311083 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2311083 /var/tmp/bdevperf.sock 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2311083 ']' 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:01.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.711 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:01.711 [2024-12-07 11:20:01.013246] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:01.711 [2024-12-07 11:20:01.013353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311083 ] 00:08:01.970 [2024-12-07 11:20:01.155692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.971 [2024-12-07 11:20:01.252521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.541 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.541 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:02.541 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:02.800 Nvme0n1 00:08:02.800 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:03.061 [ 00:08:03.061 { 00:08:03.061 "name": "Nvme0n1", 00:08:03.061 "aliases": [ 00:08:03.061 "109c1662-24e7-4f7e-b898-9f6a77b38179" 00:08:03.061 ], 00:08:03.061 "product_name": "NVMe disk", 00:08:03.061 "block_size": 4096, 00:08:03.061 "num_blocks": 38912, 00:08:03.061 "uuid": "109c1662-24e7-4f7e-b898-9f6a77b38179", 00:08:03.061 "numa_id": 0, 00:08:03.061 "assigned_rate_limits": { 00:08:03.061 "rw_ios_per_sec": 0, 00:08:03.061 "rw_mbytes_per_sec": 0, 00:08:03.061 "r_mbytes_per_sec": 0, 00:08:03.061 "w_mbytes_per_sec": 0 00:08:03.061 }, 00:08:03.061 "claimed": false, 00:08:03.061 "zoned": false, 00:08:03.061 "supported_io_types": { 00:08:03.061 "read": true, 
00:08:03.061 "write": true, 00:08:03.061 "unmap": true, 00:08:03.061 "flush": true, 00:08:03.061 "reset": true, 00:08:03.061 "nvme_admin": true, 00:08:03.061 "nvme_io": true, 00:08:03.061 "nvme_io_md": false, 00:08:03.061 "write_zeroes": true, 00:08:03.061 "zcopy": false, 00:08:03.061 "get_zone_info": false, 00:08:03.061 "zone_management": false, 00:08:03.061 "zone_append": false, 00:08:03.061 "compare": true, 00:08:03.061 "compare_and_write": true, 00:08:03.061 "abort": true, 00:08:03.061 "seek_hole": false, 00:08:03.061 "seek_data": false, 00:08:03.061 "copy": true, 00:08:03.061 "nvme_iov_md": false 00:08:03.061 }, 00:08:03.061 "memory_domains": [ 00:08:03.061 { 00:08:03.061 "dma_device_id": "system", 00:08:03.061 "dma_device_type": 1 00:08:03.061 } 00:08:03.061 ], 00:08:03.061 "driver_specific": { 00:08:03.061 "nvme": [ 00:08:03.061 { 00:08:03.061 "trid": { 00:08:03.061 "trtype": "TCP", 00:08:03.061 "adrfam": "IPv4", 00:08:03.061 "traddr": "10.0.0.2", 00:08:03.061 "trsvcid": "4420", 00:08:03.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:03.061 }, 00:08:03.061 "ctrlr_data": { 00:08:03.061 "cntlid": 1, 00:08:03.061 "vendor_id": "0x8086", 00:08:03.061 "model_number": "SPDK bdev Controller", 00:08:03.061 "serial_number": "SPDK0", 00:08:03.061 "firmware_revision": "25.01", 00:08:03.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:03.061 "oacs": { 00:08:03.061 "security": 0, 00:08:03.061 "format": 0, 00:08:03.061 "firmware": 0, 00:08:03.061 "ns_manage": 0 00:08:03.061 }, 00:08:03.061 "multi_ctrlr": true, 00:08:03.061 "ana_reporting": false 00:08:03.061 }, 00:08:03.061 "vs": { 00:08:03.061 "nvme_version": "1.3" 00:08:03.061 }, 00:08:03.061 "ns_data": { 00:08:03.061 "id": 1, 00:08:03.061 "can_share": true 00:08:03.061 } 00:08:03.061 } 00:08:03.061 ], 00:08:03.061 "mp_policy": "active_passive" 00:08:03.061 } 00:08:03.061 } 00:08:03.061 ] 00:08:03.061 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2311398 00:08:03.061 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:03.061 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.061 Running I/O for 10 seconds... 00:08:04.003 Latency(us) 00:08:04.003 [2024-12-07T10:20:03.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.003 Nvme0n1 : 1.00 16142.00 63.05 0.00 0.00 0.00 0.00 0.00 00:08:04.003 [2024-12-07T10:20:03.357Z] =================================================================================================================== 00:08:04.003 [2024-12-07T10:20:03.357Z] Total : 16142.00 63.05 0.00 0.00 0.00 0.00 0.00 00:08:04.003 00:08:04.942 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:04.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.942 Nvme0n1 : 2.00 16204.50 63.30 0.00 0.00 0.00 0.00 0.00 00:08:04.942 [2024-12-07T10:20:04.296Z] =================================================================================================================== 00:08:04.942 [2024-12-07T10:20:04.296Z] Total : 16204.50 63.30 0.00 0.00 0.00 0.00 0.00 00:08:04.942 00:08:05.201 true 00:08:05.201 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:05.201 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:05.201 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:05.201 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:05.201 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2311398 00:08:06.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.140 Nvme0n1 : 3.00 16243.67 63.45 0.00 0.00 0.00 0.00 0.00 00:08:06.140 [2024-12-07T10:20:05.494Z] =================================================================================================================== 00:08:06.141 [2024-12-07T10:20:05.495Z] Total : 16243.67 63.45 0.00 0.00 0.00 0.00 0.00 00:08:06.141 00:08:07.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.082 Nvme0n1 : 4.00 16287.00 63.62 0.00 0.00 0.00 0.00 0.00 00:08:07.082 [2024-12-07T10:20:06.436Z] =================================================================================================================== 00:08:07.082 [2024-12-07T10:20:06.436Z] Total : 16287.00 63.62 0.00 0.00 0.00 0.00 0.00 00:08:07.082 00:08:08.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.023 Nvme0n1 : 5.00 16311.40 63.72 0.00 0.00 0.00 0.00 0.00 00:08:08.023 [2024-12-07T10:20:07.377Z] =================================================================================================================== 00:08:08.023 [2024-12-07T10:20:07.377Z] Total : 16311.40 63.72 0.00 0.00 0.00 0.00 0.00 00:08:08.023 00:08:08.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.963 Nvme0n1 : 6.00 16347.17 63.86 0.00 0.00 0.00 0.00 0.00 00:08:08.963 [2024-12-07T10:20:08.318Z] =================================================================================================================== 00:08:08.964 
[2024-12-07T10:20:08.318Z] Total : 16347.17 63.86 0.00 0.00 0.00 0.00 0.00 00:08:08.964 00:08:10.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.347 Nvme0n1 : 7.00 16359.57 63.90 0.00 0.00 0.00 0.00 0.00 00:08:10.347 [2024-12-07T10:20:09.701Z] =================================================================================================================== 00:08:10.347 [2024-12-07T10:20:09.702Z] Total : 16359.57 63.90 0.00 0.00 0.00 0.00 0.00 00:08:10.348 00:08:11.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.289 Nvme0n1 : 8.00 16367.12 63.93 0.00 0.00 0.00 0.00 0.00 00:08:11.289 [2024-12-07T10:20:10.643Z] =================================================================================================================== 00:08:11.289 [2024-12-07T10:20:10.643Z] Total : 16367.12 63.93 0.00 0.00 0.00 0.00 0.00 00:08:11.289 00:08:12.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.240 Nvme0n1 : 9.00 16382.22 63.99 0.00 0.00 0.00 0.00 0.00 00:08:12.240 [2024-12-07T10:20:11.594Z] =================================================================================================================== 00:08:12.240 [2024-12-07T10:20:11.594Z] Total : 16382.22 63.99 0.00 0.00 0.00 0.00 0.00 00:08:12.240 00:08:13.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.181 Nvme0n1 : 10.00 16394.00 64.04 0.00 0.00 0.00 0.00 0.00 00:08:13.181 [2024-12-07T10:20:12.535Z] =================================================================================================================== 00:08:13.181 [2024-12-07T10:20:12.535Z] Total : 16394.00 64.04 0.00 0.00 0.00 0.00 0.00 00:08:13.181 00:08:13.181 00:08:13.181 Latency(us) 00:08:13.181 [2024-12-07T10:20:12.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:13.181 Nvme0n1 : 10.01 16395.94 64.05 0.00 0.00 7803.01 3495.25 14745.60 00:08:13.181 [2024-12-07T10:20:12.535Z] =================================================================================================================== 00:08:13.181 [2024-12-07T10:20:12.535Z] Total : 16395.94 64.05 0.00 0.00 7803.01 3495.25 14745.60 00:08:13.181 { 00:08:13.181 "results": [ 00:08:13.181 { 00:08:13.181 "job": "Nvme0n1", 00:08:13.181 "core_mask": "0x2", 00:08:13.181 "workload": "randwrite", 00:08:13.181 "status": "finished", 00:08:13.181 "queue_depth": 128, 00:08:13.181 "io_size": 4096, 00:08:13.181 "runtime": 10.006623, 00:08:13.181 "iops": 16395.940968296698, 00:08:13.181 "mibps": 64.04664440740898, 00:08:13.181 "io_failed": 0, 00:08:13.181 "io_timeout": 0, 00:08:13.181 "avg_latency_us": 7803.008654297812, 00:08:13.181 "min_latency_us": 3495.2533333333336, 00:08:13.181 "max_latency_us": 14745.6 00:08:13.181 } 00:08:13.181 ], 00:08:13.181 "core_count": 1 00:08:13.181 } 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2311083 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2311083 ']' 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2311083 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311083 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311083' 00:08:13.181 killing process with pid 2311083 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2311083 00:08:13.181 Received shutdown signal, test time was about 10.000000 seconds 00:08:13.181 00:08:13.181 Latency(us) 00:08:13.181 [2024-12-07T10:20:12.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.181 [2024-12-07T10:20:12.535Z] =================================================================================================================== 00:08:13.181 [2024-12-07T10:20:12.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:13.181 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2311083 00:08:13.753 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.753 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.014 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:14.014 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:14.275 11:20:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.275 [2024-12-07 11:20:13.559432] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.275 11:20:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.275 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.276 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:14.276 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:14.538 request: 00:08:14.538 { 00:08:14.538 "uuid": "9394fe1b-0cf3-4108-973a-ceb092e2ec3b", 00:08:14.538 "method": "bdev_lvol_get_lvstores", 00:08:14.538 "req_id": 1 00:08:14.538 } 00:08:14.538 Got JSON-RPC error response 00:08:14.538 response: 00:08:14.538 { 00:08:14.538 "code": -19, 00:08:14.538 "message": "No such device" 00:08:14.538 } 00:08:14.538 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:14.538 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.538 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.538 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.538 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.799 aio_bdev 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 109c1662-24e7-4f7e-b898-9f6a77b38179 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=109c1662-24e7-4f7e-b898-9f6a77b38179 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.799 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:14.799 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 109c1662-24e7-4f7e-b898-9f6a77b38179 -t 2000 00:08:15.060 [ 00:08:15.060 { 00:08:15.060 "name": "109c1662-24e7-4f7e-b898-9f6a77b38179", 00:08:15.060 "aliases": [ 00:08:15.060 "lvs/lvol" 00:08:15.060 ], 00:08:15.060 "product_name": "Logical Volume", 00:08:15.060 "block_size": 4096, 00:08:15.060 "num_blocks": 38912, 00:08:15.060 "uuid": "109c1662-24e7-4f7e-b898-9f6a77b38179", 00:08:15.060 "assigned_rate_limits": { 00:08:15.060 "rw_ios_per_sec": 0, 00:08:15.060 "rw_mbytes_per_sec": 0, 00:08:15.060 "r_mbytes_per_sec": 0, 00:08:15.060 "w_mbytes_per_sec": 0 00:08:15.060 }, 00:08:15.060 "claimed": false, 00:08:15.060 "zoned": false, 00:08:15.060 "supported_io_types": { 00:08:15.060 "read": true, 00:08:15.060 "write": true, 00:08:15.060 "unmap": true, 00:08:15.060 "flush": false, 00:08:15.060 "reset": true, 00:08:15.060 
"nvme_admin": false, 00:08:15.060 "nvme_io": false, 00:08:15.060 "nvme_io_md": false, 00:08:15.060 "write_zeroes": true, 00:08:15.060 "zcopy": false, 00:08:15.060 "get_zone_info": false, 00:08:15.060 "zone_management": false, 00:08:15.060 "zone_append": false, 00:08:15.060 "compare": false, 00:08:15.060 "compare_and_write": false, 00:08:15.060 "abort": false, 00:08:15.060 "seek_hole": true, 00:08:15.060 "seek_data": true, 00:08:15.060 "copy": false, 00:08:15.060 "nvme_iov_md": false 00:08:15.060 }, 00:08:15.060 "driver_specific": { 00:08:15.060 "lvol": { 00:08:15.060 "lvol_store_uuid": "9394fe1b-0cf3-4108-973a-ceb092e2ec3b", 00:08:15.060 "base_bdev": "aio_bdev", 00:08:15.060 "thin_provision": false, 00:08:15.060 "num_allocated_clusters": 38, 00:08:15.060 "snapshot": false, 00:08:15.060 "clone": false, 00:08:15.060 "esnap_clone": false 00:08:15.060 } 00:08:15.060 } 00:08:15.060 } 00:08:15.060 ] 00:08:15.060 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:15.060 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:15.060 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:15.321 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:15.321 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:15.321 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:15.321 11:20:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:15.321 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 109c1662-24e7-4f7e-b898-9f6a77b38179 00:08:15.582 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9394fe1b-0cf3-4108-973a-ceb092e2ec3b 00:08:15.843 11:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:15.843 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.843 00:08:15.843 real 0m15.978s 00:08:15.843 user 0m15.561s 00:08:15.843 sys 0m1.409s 00:08:15.843 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.843 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:15.843 ************************************ 00:08:15.843 END TEST lvs_grow_clean 00:08:15.843 ************************************ 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.103 ************************************ 
00:08:16.103 START TEST lvs_grow_dirty 00:08:16.103 ************************************ 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:16.103 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:16.363 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b13e9cf-d743-46ce-b218-558b127a7005 00:08:16.363 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:16.363 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:16.622 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:16.623 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:16.623 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b13e9cf-d743-46ce-b218-558b127a7005 lvol 150 00:08:16.884 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:16.884 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.884 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.884 [2024-12-07 11:20:16.129500] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:16.884 [2024-12-07 11:20:16.129575] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.884 true 00:08:16.884 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:16.884 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:17.145 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:17.145 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.145 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:17.405 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:17.666 [2024-12-07 11:20:16.799765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2314478 00:08:17.666 11:20:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2314478 /var/tmp/bdevperf.sock 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2314478 ']' 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.666 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.927 [2024-12-07 11:20:17.056719] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:17.927 [2024-12-07 11:20:17.056822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314478 ] 00:08:17.927 [2024-12-07 11:20:17.198027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.187 [2024-12-07 11:20:17.297199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.758 11:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.758 11:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:18.758 11:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.018 Nvme0n1 00:08:19.018 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.278 [ 00:08:19.278 { 00:08:19.278 "name": "Nvme0n1", 00:08:19.278 "aliases": [ 00:08:19.278 "d809beff-57ff-4e1d-b356-2a3325d19bc1" 00:08:19.278 ], 00:08:19.278 "product_name": "NVMe disk", 00:08:19.278 "block_size": 4096, 00:08:19.278 "num_blocks": 38912, 00:08:19.278 "uuid": "d809beff-57ff-4e1d-b356-2a3325d19bc1", 00:08:19.278 "numa_id": 0, 00:08:19.278 "assigned_rate_limits": { 00:08:19.278 "rw_ios_per_sec": 0, 00:08:19.278 "rw_mbytes_per_sec": 0, 00:08:19.278 "r_mbytes_per_sec": 0, 00:08:19.278 "w_mbytes_per_sec": 0 00:08:19.278 }, 00:08:19.278 "claimed": false, 00:08:19.278 "zoned": false, 00:08:19.278 "supported_io_types": { 00:08:19.278 "read": true, 
00:08:19.278 "write": true, 00:08:19.278 "unmap": true, 00:08:19.278 "flush": true, 00:08:19.278 "reset": true, 00:08:19.278 "nvme_admin": true, 00:08:19.278 "nvme_io": true, 00:08:19.278 "nvme_io_md": false, 00:08:19.278 "write_zeroes": true, 00:08:19.278 "zcopy": false, 00:08:19.278 "get_zone_info": false, 00:08:19.278 "zone_management": false, 00:08:19.278 "zone_append": false, 00:08:19.278 "compare": true, 00:08:19.278 "compare_and_write": true, 00:08:19.278 "abort": true, 00:08:19.278 "seek_hole": false, 00:08:19.278 "seek_data": false, 00:08:19.278 "copy": true, 00:08:19.278 "nvme_iov_md": false 00:08:19.278 }, 00:08:19.278 "memory_domains": [ 00:08:19.278 { 00:08:19.278 "dma_device_id": "system", 00:08:19.278 "dma_device_type": 1 00:08:19.278 } 00:08:19.278 ], 00:08:19.278 "driver_specific": { 00:08:19.278 "nvme": [ 00:08:19.278 { 00:08:19.278 "trid": { 00:08:19.278 "trtype": "TCP", 00:08:19.278 "adrfam": "IPv4", 00:08:19.278 "traddr": "10.0.0.2", 00:08:19.278 "trsvcid": "4420", 00:08:19.278 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:19.278 }, 00:08:19.278 "ctrlr_data": { 00:08:19.279 "cntlid": 1, 00:08:19.279 "vendor_id": "0x8086", 00:08:19.279 "model_number": "SPDK bdev Controller", 00:08:19.279 "serial_number": "SPDK0", 00:08:19.279 "firmware_revision": "25.01", 00:08:19.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.279 "oacs": { 00:08:19.279 "security": 0, 00:08:19.279 "format": 0, 00:08:19.279 "firmware": 0, 00:08:19.279 "ns_manage": 0 00:08:19.279 }, 00:08:19.279 "multi_ctrlr": true, 00:08:19.279 "ana_reporting": false 00:08:19.279 }, 00:08:19.279 "vs": { 00:08:19.279 "nvme_version": "1.3" 00:08:19.279 }, 00:08:19.279 "ns_data": { 00:08:19.279 "id": 1, 00:08:19.279 "can_share": true 00:08:19.279 } 00:08:19.279 } 00:08:19.279 ], 00:08:19.279 "mp_policy": "active_passive" 00:08:19.279 } 00:08:19.279 } 00:08:19.279 ] 00:08:19.279 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2314729 00:08:19.279 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.279 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.279 Running I/O for 10 seconds... 00:08:20.218 Latency(us) 00:08:20.218 [2024-12-07T10:20:19.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.218 Nvme0n1 : 1.00 16148.00 63.08 0.00 0.00 0.00 0.00 0.00 00:08:20.218 [2024-12-07T10:20:19.572Z] =================================================================================================================== 00:08:20.218 [2024-12-07T10:20:19.572Z] Total : 16148.00 63.08 0.00 0.00 0.00 0.00 0.00 00:08:20.218 00:08:21.158 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:21.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.158 Nvme0n1 : 2.00 16233.50 63.41 0.00 0.00 0.00 0.00 0.00 00:08:21.158 [2024-12-07T10:20:20.512Z] =================================================================================================================== 00:08:21.158 [2024-12-07T10:20:20.512Z] Total : 16233.50 63.41 0.00 0.00 0.00 0.00 0.00 00:08:21.158 00:08:21.419 true 00:08:21.419 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.419 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0b13e9cf-d743-46ce-b218-558b127a7005 00:08:21.419 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.419 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.419 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2314729 00:08:22.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.360 Nvme0n1 : 3.00 16284.00 63.61 0.00 0.00 0.00 0.00 0.00 00:08:22.360 [2024-12-07T10:20:21.714Z] =================================================================================================================== 00:08:22.360 [2024-12-07T10:20:21.714Z] Total : 16284.00 63.61 0.00 0.00 0.00 0.00 0.00 00:08:22.360 00:08:23.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.372 Nvme0n1 : 4.00 16329.75 63.79 0.00 0.00 0.00 0.00 0.00 00:08:23.372 [2024-12-07T10:20:22.726Z] =================================================================================================================== 00:08:23.372 [2024-12-07T10:20:22.726Z] Total : 16329.75 63.79 0.00 0.00 0.00 0.00 0.00 00:08:23.372 00:08:24.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.333 Nvme0n1 : 5.00 16342.20 63.84 0.00 0.00 0.00 0.00 0.00 00:08:24.333 [2024-12-07T10:20:23.687Z] =================================================================================================================== 00:08:24.333 [2024-12-07T10:20:23.687Z] Total : 16342.20 63.84 0.00 0.00 0.00 0.00 0.00 00:08:24.333 00:08:25.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.271 Nvme0n1 : 6.00 16373.17 63.96 0.00 0.00 0.00 0.00 0.00 00:08:25.271 [2024-12-07T10:20:24.625Z] =================================================================================================================== 
00:08:25.271 [2024-12-07T10:20:24.625Z] Total : 16373.17 63.96 0.00 0.00 0.00 0.00 0.00 00:08:25.271 00:08:26.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.219 Nvme0n1 : 7.00 16386.57 64.01 0.00 0.00 0.00 0.00 0.00 00:08:26.219 [2024-12-07T10:20:25.573Z] =================================================================================================================== 00:08:26.219 [2024-12-07T10:20:25.573Z] Total : 16386.57 64.01 0.00 0.00 0.00 0.00 0.00 00:08:26.219 00:08:27.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.156 Nvme0n1 : 8.00 16403.50 64.08 0.00 0.00 0.00 0.00 0.00 00:08:27.156 [2024-12-07T10:20:26.510Z] =================================================================================================================== 00:08:27.156 [2024-12-07T10:20:26.510Z] Total : 16403.50 64.08 0.00 0.00 0.00 0.00 0.00 00:08:27.156 00:08:28.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.540 Nvme0n1 : 9.00 16424.89 64.16 0.00 0.00 0.00 0.00 0.00 00:08:28.540 [2024-12-07T10:20:27.894Z] =================================================================================================================== 00:08:28.540 [2024-12-07T10:20:27.894Z] Total : 16424.89 64.16 0.00 0.00 0.00 0.00 0.00 00:08:28.540 00:08:29.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.505 Nvme0n1 : 10.00 16429.20 64.18 0.00 0.00 0.00 0.00 0.00 00:08:29.505 [2024-12-07T10:20:28.859Z] =================================================================================================================== 00:08:29.505 [2024-12-07T10:20:28.859Z] Total : 16429.20 64.18 0.00 0.00 0.00 0.00 0.00 00:08:29.505 00:08:29.505 00:08:29.505 Latency(us) 00:08:29.505 [2024-12-07T10:20:28.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:29.505 Nvme0n1 : 10.00 16437.56 64.21 0.00 0.00 7783.00 2484.91 15291.73 00:08:29.505 [2024-12-07T10:20:28.859Z] =================================================================================================================== 00:08:29.505 [2024-12-07T10:20:28.859Z] Total : 16437.56 64.21 0.00 0.00 7783.00 2484.91 15291.73 00:08:29.505 { 00:08:29.505 "results": [ 00:08:29.505 { 00:08:29.505 "job": "Nvme0n1", 00:08:29.505 "core_mask": "0x2", 00:08:29.505 "workload": "randwrite", 00:08:29.505 "status": "finished", 00:08:29.505 "queue_depth": 128, 00:08:29.505 "io_size": 4096, 00:08:29.505 "runtime": 10.002703, 00:08:29.505 "iops": 16437.556928362264, 00:08:29.505 "mibps": 64.2092067514151, 00:08:29.505 "io_failed": 0, 00:08:29.505 "io_timeout": 0, 00:08:29.505 "avg_latency_us": 7783.004900295991, 00:08:29.505 "min_latency_us": 2484.9066666666668, 00:08:29.505 "max_latency_us": 15291.733333333334 00:08:29.505 } 00:08:29.505 ], 00:08:29.505 "core_count": 1 00:08:29.505 } 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2314478 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2314478 ']' 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2314478 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2314478 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.505 11:20:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2314478' 00:08:29.505 killing process with pid 2314478 00:08:29.505 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2314478 00:08:29.505 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.505 00:08:29.505 Latency(us) 00:08:29.505 [2024-12-07T10:20:28.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.506 [2024-12-07T10:20:28.860Z] =================================================================================================================== 00:08:29.506 [2024-12-07T10:20:28.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.506 11:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2314478 00:08:29.766 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.027 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.027 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:30.027 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2310629 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2310629 00:08:30.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2310629 Killed "${NVMF_APP[@]}" "$@" 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2316857 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2316857 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2316857 ']' 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.287 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.547 [2024-12-07 11:20:29.679195] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:30.547 [2024-12-07 11:20:29.679305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.547 [2024-12-07 11:20:29.826634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.807 [2024-12-07 11:20:29.923138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.808 [2024-12-07 11:20:29.923183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.808 [2024-12-07 11:20:29.923195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.808 [2024-12-07 11:20:29.923206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.808 [2024-12-07 11:20:29.923217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:30.808 [2024-12-07 11:20:29.924405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.068 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.069 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:31.069 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.069 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.069 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.329 [2024-12-07 11:20:30.608671] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:31.329 [2024-12-07 11:20:30.608822] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:31.329 [2024-12-07 11:20:30.608866] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d809beff-57ff-4e1d-b356-2a3325d19bc1 
00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.329 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.591 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d809beff-57ff-4e1d-b356-2a3325d19bc1 -t 2000 00:08:31.851 [ 00:08:31.851 { 00:08:31.851 "name": "d809beff-57ff-4e1d-b356-2a3325d19bc1", 00:08:31.851 "aliases": [ 00:08:31.851 "lvs/lvol" 00:08:31.851 ], 00:08:31.851 "product_name": "Logical Volume", 00:08:31.851 "block_size": 4096, 00:08:31.851 "num_blocks": 38912, 00:08:31.851 "uuid": "d809beff-57ff-4e1d-b356-2a3325d19bc1", 00:08:31.851 "assigned_rate_limits": { 00:08:31.851 "rw_ios_per_sec": 0, 00:08:31.851 "rw_mbytes_per_sec": 0, 00:08:31.851 "r_mbytes_per_sec": 0, 00:08:31.851 "w_mbytes_per_sec": 0 00:08:31.851 }, 00:08:31.851 "claimed": false, 00:08:31.851 "zoned": false, 00:08:31.851 "supported_io_types": { 00:08:31.851 "read": true, 00:08:31.851 "write": true, 00:08:31.851 "unmap": true, 00:08:31.851 "flush": false, 00:08:31.851 "reset": true, 00:08:31.851 "nvme_admin": false, 00:08:31.851 "nvme_io": false, 00:08:31.851 "nvme_io_md": false, 00:08:31.851 "write_zeroes": true, 00:08:31.851 "zcopy": false, 00:08:31.851 "get_zone_info": false, 00:08:31.851 "zone_management": false, 00:08:31.851 "zone_append": 
false, 00:08:31.851 "compare": false, 00:08:31.851 "compare_and_write": false, 00:08:31.851 "abort": false, 00:08:31.851 "seek_hole": true, 00:08:31.851 "seek_data": true, 00:08:31.851 "copy": false, 00:08:31.851 "nvme_iov_md": false 00:08:31.851 }, 00:08:31.851 "driver_specific": { 00:08:31.851 "lvol": { 00:08:31.851 "lvol_store_uuid": "0b13e9cf-d743-46ce-b218-558b127a7005", 00:08:31.851 "base_bdev": "aio_bdev", 00:08:31.851 "thin_provision": false, 00:08:31.851 "num_allocated_clusters": 38, 00:08:31.851 "snapshot": false, 00:08:31.851 "clone": false, 00:08:31.851 "esnap_clone": false 00:08:31.851 } 00:08:31.851 } 00:08:31.851 } 00:08:31.851 ] 00:08:31.851 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:31.851 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:31.851 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:31.851 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:31.851 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:31.851 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:32.135 [2024-12-07 11:20:31.448393] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.135 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.136 11:20:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:32.136 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:32.396 request: 00:08:32.396 { 00:08:32.396 "uuid": "0b13e9cf-d743-46ce-b218-558b127a7005", 00:08:32.396 "method": "bdev_lvol_get_lvstores", 00:08:32.396 "req_id": 1 00:08:32.396 } 00:08:32.396 Got JSON-RPC error response 00:08:32.396 response: 00:08:32.396 { 00:08:32.396 "code": -19, 00:08:32.396 "message": "No such device" 00:08:32.396 } 00:08:32.396 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:32.396 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.396 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.396 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.396 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.656 aio_bdev 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.656 11:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d809beff-57ff-4e1d-b356-2a3325d19bc1 -t 2000 00:08:32.916 [ 00:08:32.916 { 00:08:32.916 "name": "d809beff-57ff-4e1d-b356-2a3325d19bc1", 00:08:32.916 "aliases": [ 00:08:32.916 "lvs/lvol" 00:08:32.916 ], 00:08:32.916 "product_name": "Logical Volume", 00:08:32.916 "block_size": 4096, 00:08:32.916 "num_blocks": 38912, 00:08:32.916 "uuid": "d809beff-57ff-4e1d-b356-2a3325d19bc1", 00:08:32.916 "assigned_rate_limits": { 00:08:32.916 "rw_ios_per_sec": 0, 00:08:32.916 "rw_mbytes_per_sec": 0, 00:08:32.916 "r_mbytes_per_sec": 0, 00:08:32.916 "w_mbytes_per_sec": 0 00:08:32.916 }, 00:08:32.916 "claimed": false, 00:08:32.916 "zoned": false, 00:08:32.916 "supported_io_types": { 00:08:32.916 "read": true, 00:08:32.916 "write": true, 00:08:32.916 "unmap": true, 00:08:32.916 "flush": false, 00:08:32.916 "reset": true, 00:08:32.916 "nvme_admin": false, 00:08:32.916 "nvme_io": false, 00:08:32.916 "nvme_io_md": false, 00:08:32.916 "write_zeroes": true, 00:08:32.916 "zcopy": false, 00:08:32.916 "get_zone_info": false, 00:08:32.916 "zone_management": false, 00:08:32.916 "zone_append": false, 00:08:32.916 "compare": false, 00:08:32.916 "compare_and_write": false, 
00:08:32.916 "abort": false, 00:08:32.916 "seek_hole": true, 00:08:32.916 "seek_data": true, 00:08:32.916 "copy": false, 00:08:32.916 "nvme_iov_md": false 00:08:32.916 }, 00:08:32.917 "driver_specific": { 00:08:32.917 "lvol": { 00:08:32.917 "lvol_store_uuid": "0b13e9cf-d743-46ce-b218-558b127a7005", 00:08:32.917 "base_bdev": "aio_bdev", 00:08:32.917 "thin_provision": false, 00:08:32.917 "num_allocated_clusters": 38, 00:08:32.917 "snapshot": false, 00:08:32.917 "clone": false, 00:08:32.917 "esnap_clone": false 00:08:32.917 } 00:08:32.917 } 00:08:32.917 } 00:08:32.917 ] 00:08:32.917 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:32.917 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:32.917 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.177 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.177 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:33.177 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.177 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.177 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d809beff-57ff-4e1d-b356-2a3325d19bc1 00:08:33.437 11:20:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b13e9cf-d743-46ce-b218-558b127a7005 00:08:33.437 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:33.697 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.697 00:08:33.697 real 0m17.736s 00:08:33.697 user 0m46.450s 00:08:33.697 sys 0m3.029s 00:08:33.697 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.697 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 ************************************ 00:08:33.697 END TEST lvs_grow_dirty 00:08:33.697 ************************************ 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:33.697 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:33.697 nvmf_trace.0 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.958 rmmod nvme_tcp 00:08:33.958 rmmod nvme_fabrics 00:08:33.958 rmmod nvme_keyring 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2316857 ']' 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2316857 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2316857 ']' 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2316857 
00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316857 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316857' 00:08:33.958 killing process with pid 2316857 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2316857 00:08:33.958 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2316857 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.898 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.808 00:08:36.808 real 0m45.746s 00:08:36.808 user 1m8.883s 00:08:36.808 sys 0m10.581s 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 ************************************ 00:08:36.808 END TEST nvmf_lvs_grow 00:08:36.808 ************************************ 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.808 11:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.069 ************************************ 00:08:37.069 START TEST nvmf_bdev_io_wait 00:08:37.069 ************************************ 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:37.069 * Looking for test storage... 
00:08:37.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.069 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.069 --rc genhtml_branch_coverage=1 00:08:37.069 --rc genhtml_function_coverage=1 00:08:37.069 --rc genhtml_legend=1 00:08:37.069 --rc geninfo_all_blocks=1 00:08:37.069 --rc geninfo_unexecuted_blocks=1 00:08:37.069 00:08:37.069 ' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.069 --rc genhtml_branch_coverage=1 00:08:37.069 --rc genhtml_function_coverage=1 00:08:37.069 --rc genhtml_legend=1 00:08:37.069 --rc geninfo_all_blocks=1 00:08:37.069 --rc geninfo_unexecuted_blocks=1 00:08:37.069 00:08:37.069 ' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.069 --rc genhtml_branch_coverage=1 00:08:37.069 --rc genhtml_function_coverage=1 00:08:37.069 --rc genhtml_legend=1 00:08:37.069 --rc geninfo_all_blocks=1 00:08:37.069 --rc geninfo_unexecuted_blocks=1 00:08:37.069 00:08:37.069 ' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.069 --rc genhtml_branch_coverage=1 00:08:37.069 --rc genhtml_function_coverage=1 00:08:37.069 --rc genhtml_legend=1 00:08:37.069 --rc geninfo_all_blocks=1 00:08:37.069 --rc geninfo_unexecuted_blocks=1 00:08:37.069 00:08:37.069 ' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.069 11:20:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:37.069 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.070 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.204 11:20:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:45.204 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:45.204 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.204 11:20:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:45.204 Found net devices under 0000:31:00.0: cvl_0_0 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.204 
11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:45.204 Found net devices under 0000:31:00.1: cvl_0_1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.204 11:20:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.204 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:08:45.204 00:08:45.205 --- 10.0.0.2 ping statistics --- 00:08:45.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.205 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:08:45.205 00:08:45.205 --- 10.0.0.1 ping statistics --- 00:08:45.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.205 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2322033 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2322033 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2322033 ']' 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.205 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.205 [2024-12-07 11:20:43.930198] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:45.205 [2024-12-07 11:20:43.930325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.205 [2024-12-07 11:20:44.080106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.205 [2024-12-07 11:20:44.185733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.205 [2024-12-07 11:20:44.185777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:45.205 [2024-12-07 11:20:44.185789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.205 [2024-12-07 11:20:44.185800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.205 [2024-12-07 11:20:44.185809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.205 [2024-12-07 11:20:44.188050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.205 [2024-12-07 11:20:44.188134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.205 [2024-12-07 11:20:44.188366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.205 [2024-12-07 11:20:44.188386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.465 11:20:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.465 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 [2024-12-07 11:20:44.931302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 Malloc0 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.726 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.726 
11:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.726 [2024-12-07 11:20:45.029677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2322353 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2322355 
00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.726 { 00:08:45.726 "params": { 00:08:45.726 "name": "Nvme$subsystem", 00:08:45.726 "trtype": "$TEST_TRANSPORT", 00:08:45.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "$NVMF_PORT", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.726 "hdgst": ${hdgst:-false}, 00:08:45.726 "ddgst": ${ddgst:-false} 00:08:45.726 }, 00:08:45.726 "method": "bdev_nvme_attach_controller" 00:08:45.726 } 00:08:45.726 EOF 00:08:45.726 )") 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2322357 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.726 { 00:08:45.726 "params": { 00:08:45.726 
"name": "Nvme$subsystem", 00:08:45.726 "trtype": "$TEST_TRANSPORT", 00:08:45.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "$NVMF_PORT", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.726 "hdgst": ${hdgst:-false}, 00:08:45.726 "ddgst": ${ddgst:-false} 00:08:45.726 }, 00:08:45.726 "method": "bdev_nvme_attach_controller" 00:08:45.726 } 00:08:45.726 EOF 00:08:45.726 )") 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2322360 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.726 { 00:08:45.726 "params": { 00:08:45.726 "name": "Nvme$subsystem", 00:08:45.726 "trtype": "$TEST_TRANSPORT", 00:08:45.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "$NVMF_PORT", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.726 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:45.726 "hdgst": ${hdgst:-false}, 00:08:45.726 "ddgst": ${ddgst:-false} 00:08:45.726 }, 00:08:45.726 "method": "bdev_nvme_attach_controller" 00:08:45.726 } 00:08:45.726 EOF 00:08:45.726 )") 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.726 { 00:08:45.726 "params": { 00:08:45.726 "name": "Nvme$subsystem", 00:08:45.726 "trtype": "$TEST_TRANSPORT", 00:08:45.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "$NVMF_PORT", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.726 "hdgst": ${hdgst:-false}, 00:08:45.726 "ddgst": ${ddgst:-false} 00:08:45.726 }, 00:08:45.726 "method": "bdev_nvme_attach_controller" 00:08:45.726 } 00:08:45.726 EOF 00:08:45.726 )") 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2322353 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.726 "params": { 00:08:45.726 "name": "Nvme1", 00:08:45.726 "trtype": "tcp", 00:08:45.726 "traddr": "10.0.0.2", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "4420", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.726 "hdgst": false, 00:08:45.726 "ddgst": false 00:08:45.726 }, 00:08:45.726 "method": "bdev_nvme_attach_controller" 00:08:45.726 }' 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.726 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.726 "params": { 00:08:45.726 "name": "Nvme1", 00:08:45.726 "trtype": "tcp", 00:08:45.726 "traddr": "10.0.0.2", 00:08:45.726 "adrfam": "ipv4", 00:08:45.726 "trsvcid": "4420", 00:08:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.726 "hdgst": false, 00:08:45.727 "ddgst": false 00:08:45.727 }, 00:08:45.727 "method": "bdev_nvme_attach_controller" 00:08:45.727 }' 00:08:45.727 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.727 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.727 "params": { 00:08:45.727 "name": "Nvme1", 00:08:45.727 "trtype": "tcp", 00:08:45.727 "traddr": "10.0.0.2", 00:08:45.727 "adrfam": "ipv4", 00:08:45.727 "trsvcid": "4420", 00:08:45.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.727 "hdgst": false, 00:08:45.727 "ddgst": false 00:08:45.727 }, 00:08:45.727 "method": "bdev_nvme_attach_controller" 00:08:45.727 }' 00:08:45.727 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.727 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.727 "params": { 00:08:45.727 "name": "Nvme1", 00:08:45.727 "trtype": "tcp", 00:08:45.727 "traddr": "10.0.0.2", 00:08:45.727 "adrfam": "ipv4", 00:08:45.727 "trsvcid": "4420", 00:08:45.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.727 "hdgst": false, 00:08:45.727 "ddgst": false 00:08:45.727 }, 00:08:45.727 "method": "bdev_nvme_attach_controller" 00:08:45.727 }' 00:08:45.987 [2024-12-07 11:20:45.113557] Starting SPDK v25.01-pre git sha1 
a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:45.987 [2024-12-07 11:20:45.113669] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:45.987 [2024-12-07 11:20:45.115282] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:45.987 [2024-12-07 11:20:45.115303] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:45.987 [2024-12-07 11:20:45.115379] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:45.987 [2024-12-07 11:20:45.115397] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:45.987 [2024-12-07 11:20:45.116201] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:45.987 [2024-12-07 11:20:45.116295] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:45.987 [2024-12-07 11:20:45.317882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.246 [2024-12-07 11:20:45.377356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.246 [2024-12-07 11:20:45.431286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.246 [2024-12-07 11:20:45.441349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.246 [2024-12-07 11:20:45.471467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.246 [2024-12-07 11:20:45.486733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.246 [2024-12-07 11:20:45.537476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:46.246 [2024-12-07 11:20:45.579576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.506 Running I/O for 1 seconds... 00:08:46.766 Running I/O for 1 seconds... 00:08:46.766 Running I/O for 1 seconds... 00:08:46.766 Running I/O for 1 seconds... 
00:08:47.708 7467.00 IOPS, 29.17 MiB/s 00:08:47.708 Latency(us) 00:08:47.708 [2024-12-07T10:20:47.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.708 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:47.708 Nvme1n1 : 1.02 7475.36 29.20 0.00 0.00 16968.85 7482.03 24685.23 00:08:47.708 [2024-12-07T10:20:47.062Z] =================================================================================================================== 00:08:47.708 [2024-12-07T10:20:47.062Z] Total : 7475.36 29.20 0.00 0.00 16968.85 7482.03 24685.23 00:08:47.708 7551.00 IOPS, 29.50 MiB/s 00:08:47.708 Latency(us) 00:08:47.708 [2024-12-07T10:20:47.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.708 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:47.708 Nvme1n1 : 1.01 7672.65 29.97 0.00 0.00 16633.14 4314.45 34078.72 00:08:47.708 [2024-12-07T10:20:47.062Z] =================================================================================================================== 00:08:47.708 [2024-12-07T10:20:47.062Z] Total : 7672.65 29.97 0.00 0.00 16633.14 4314.45 34078.72 00:08:47.708 16423.00 IOPS, 64.15 MiB/s 00:08:47.708 Latency(us) 00:08:47.708 [2024-12-07T10:20:47.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.708 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:47.708 Nvme1n1 : 1.01 16490.77 64.42 0.00 0.00 7739.70 3549.87 16056.32 00:08:47.708 [2024-12-07T10:20:47.062Z] =================================================================================================================== 00:08:47.708 [2024-12-07T10:20:47.062Z] Total : 16490.77 64.42 0.00 0.00 7739.70 3549.87 16056.32 00:08:47.708 166408.00 IOPS, 650.03 MiB/s 00:08:47.708 Latency(us) 00:08:47.708 [2024-12-07T10:20:47.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.708 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:08:47.708 Nvme1n1 : 1.00 166058.48 648.67 0.00 0.00 766.66 341.33 2075.31 00:08:47.708 [2024-12-07T10:20:47.062Z] =================================================================================================================== 00:08:47.708 [2024-12-07T10:20:47.062Z] Total : 166058.48 648.67 0.00 0.00 766.66 341.33 2075.31 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2322355 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2322357 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2322360 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.280 
11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.280 rmmod nvme_tcp 00:08:48.280 rmmod nvme_fabrics 00:08:48.280 rmmod nvme_keyring 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2322033 ']' 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2322033 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2322033 ']' 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2322033 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322033 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322033' 00:08:48.280 killing process with pid 2322033 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2322033 00:08:48.280 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 2322033 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.223 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.134 00:08:51.134 real 0m14.233s 00:08:51.134 user 0m25.500s 00:08:51.134 sys 0m7.389s 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.134 ************************************ 00:08:51.134 END TEST nvmf_bdev_io_wait 
00:08:51.134 ************************************ 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.134 11:20:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.396 ************************************ 00:08:51.396 START TEST nvmf_queue_depth 00:08:51.396 ************************************ 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:51.396 * Looking for test storage... 00:08:51.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.396 11:20:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.396 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.397 --rc genhtml_branch_coverage=1 00:08:51.397 --rc genhtml_function_coverage=1 00:08:51.397 --rc genhtml_legend=1 00:08:51.397 --rc geninfo_all_blocks=1 00:08:51.397 --rc 
geninfo_unexecuted_blocks=1 00:08:51.397 00:08:51.397 ' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.397 --rc genhtml_branch_coverage=1 00:08:51.397 --rc genhtml_function_coverage=1 00:08:51.397 --rc genhtml_legend=1 00:08:51.397 --rc geninfo_all_blocks=1 00:08:51.397 --rc geninfo_unexecuted_blocks=1 00:08:51.397 00:08:51.397 ' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.397 --rc genhtml_branch_coverage=1 00:08:51.397 --rc genhtml_function_coverage=1 00:08:51.397 --rc genhtml_legend=1 00:08:51.397 --rc geninfo_all_blocks=1 00:08:51.397 --rc geninfo_unexecuted_blocks=1 00:08:51.397 00:08:51.397 ' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.397 --rc genhtml_branch_coverage=1 00:08:51.397 --rc genhtml_function_coverage=1 00:08:51.397 --rc genhtml_legend=1 00:08:51.397 --rc geninfo_all_blocks=1 00:08:51.397 --rc geninfo_unexecuted_blocks=1 00:08:51.397 00:08:51.397 ' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.397 11:20:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.397 11:20:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.397 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.398 11:20:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.398 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.534 11:20:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:59.534 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:59.534 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.534 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:59.535 Found net devices under 0000:31:00.0: cvl_0_0 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:59.535 Found net devices under 0000:31:00.1: cvl_0_1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.535 
11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.535 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:08:59.535 00:08:59.535 --- 10.0.0.2 ping statistics --- 00:08:59.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.535 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:08:59.535 00:08:59.535 --- 10.0.0.1 ping statistics --- 00:08:59.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.535 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2327324 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2327324 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2327324 ']' 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.535 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.535 [2024-12-07 11:20:58.192754] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:59.535 [2024-12-07 11:20:58.192880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.535 [2024-12-07 11:20:58.362008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.535 [2024-12-07 11:20:58.484992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.535 [2024-12-07 11:20:58.485071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:59.535 [2024-12-07 11:20:58.485084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.535 [2024-12-07 11:20:58.485097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.535 [2024-12-07 11:20:58.485111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.535 [2024-12-07 11:20:58.486619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.796 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.796 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:59.796 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.796 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.796 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 [2024-12-07 11:20:59.018314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 Malloc0 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 [2024-12-07 11:20:59.115335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.796 11:20:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2327479 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2327479 /var/tmp/bdevperf.sock 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2327479 ']' 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.796 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.056 [2024-12-07 11:20:59.214768] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:00.056 [2024-12-07 11:20:59.214883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327479 ] 00:09:00.056 [2024-12-07 11:20:59.354293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.316 [2024-12-07 11:20:59.453784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.885 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.885 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:00.885 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:00.885 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.885 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.885 NVMe0n1 00:09:00.885 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.885 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.145 Running I/O for 10 seconds... 
00:09:03.048 7924.00 IOPS, 30.95 MiB/s [2024-12-07T10:21:03.343Z] 8154.50 IOPS, 31.85 MiB/s [2024-12-07T10:21:04.367Z] 8746.00 IOPS, 34.16 MiB/s [2024-12-07T10:21:05.314Z] 9216.00 IOPS, 36.00 MiB/s [2024-12-07T10:21:06.698Z] 9421.60 IOPS, 36.80 MiB/s [2024-12-07T10:21:07.641Z] 9559.67 IOPS, 37.34 MiB/s [2024-12-07T10:21:08.584Z] 9713.14 IOPS, 37.94 MiB/s [2024-12-07T10:21:09.525Z] 9848.50 IOPS, 38.47 MiB/s [2024-12-07T10:21:10.468Z] 9900.22 IOPS, 38.67 MiB/s [2024-12-07T10:21:10.468Z] 9985.50 IOPS, 39.01 MiB/s 00:09:11.114 Latency(us) 00:09:11.114 [2024-12-07T10:21:10.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.114 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:11.114 Verification LBA range: start 0x0 length 0x4000 00:09:11.114 NVMe0n1 : 10.06 10013.74 39.12 0.00 0.00 101813.57 13216.43 83449.17 00:09:11.114 [2024-12-07T10:21:10.468Z] =================================================================================================================== 00:09:11.114 [2024-12-07T10:21:10.468Z] Total : 10013.74 39.12 0.00 0.00 101813.57 13216.43 83449.17 00:09:11.114 { 00:09:11.114 "results": [ 00:09:11.114 { 00:09:11.114 "job": "NVMe0n1", 00:09:11.114 "core_mask": "0x1", 00:09:11.114 "workload": "verify", 00:09:11.114 "status": "finished", 00:09:11.114 "verify_range": { 00:09:11.114 "start": 0, 00:09:11.114 "length": 16384 00:09:11.114 }, 00:09:11.114 "queue_depth": 1024, 00:09:11.114 "io_size": 4096, 00:09:11.114 "runtime": 10.05748, 00:09:11.114 "iops": 10013.741016636373, 00:09:11.114 "mibps": 39.11617584623583, 00:09:11.114 "io_failed": 0, 00:09:11.114 "io_timeout": 0, 00:09:11.114 "avg_latency_us": 101813.56728340266, 00:09:11.114 "min_latency_us": 13216.426666666666, 00:09:11.114 "max_latency_us": 83449.17333333334 00:09:11.114 } 00:09:11.114 ], 00:09:11.114 "core_count": 1 00:09:11.114 } 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2327479 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2327479 ']' 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2327479 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.114 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327479 00:09:11.376 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.376 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.376 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327479' 00:09:11.376 killing process with pid 2327479 00:09:11.376 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2327479 00:09:11.376 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.376 00:09:11.376 Latency(us) 00:09:11.376 [2024-12-07T10:21:10.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.376 [2024-12-07T10:21:10.730Z] =================================================================================================================== 00:09:11.376 [2024-12-07T10:21:10.730Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.376 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2327479 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.945 rmmod nvme_tcp 00:09:11.945 rmmod nvme_fabrics 00:09:11.945 rmmod nvme_keyring 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2327324 ']' 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2327324 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2327324 ']' 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2327324 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327324 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327324' 00:09:11.945 killing process with pid 2327324 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2327324 00:09:11.945 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2327324 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.515 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.776 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.776 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.776 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.776 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.776 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.690 11:21:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.690 00:09:14.690 real 0m23.449s 00:09:14.690 user 0m27.237s 00:09:14.690 sys 0m7.012s 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.690 ************************************ 00:09:14.690 END TEST nvmf_queue_depth 00:09:14.690 ************************************ 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.690 11:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.690 ************************************ 00:09:14.690 START TEST nvmf_target_multipath 00:09:14.690 ************************************ 00:09:14.690 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.952 * Looking for test storage... 
00:09:14.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:14.952 11:21:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.952 --rc genhtml_branch_coverage=1 00:09:14.952 --rc genhtml_function_coverage=1 00:09:14.952 --rc genhtml_legend=1 00:09:14.952 --rc geninfo_all_blocks=1 00:09:14.952 --rc geninfo_unexecuted_blocks=1 00:09:14.952 00:09:14.952 ' 00:09:14.952 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.952 --rc genhtml_branch_coverage=1 00:09:14.952 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.953 --rc genhtml_branch_coverage=1 00:09:14.953 --rc genhtml_function_coverage=1 00:09:14.953 --rc genhtml_legend=1 00:09:14.953 --rc geninfo_all_blocks=1 00:09:14.953 --rc geninfo_unexecuted_blocks=1 00:09:14.953 00:09:14.953 ' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.953 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.094 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:23.095 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:23.095 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:23.095 Found net devices under 0000:31:00.0: cvl_0_0 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.095 11:21:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:23.095 Found net devices under 0000:31:00.1: cvl_0_1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:09:23.095 00:09:23.095 --- 10.0.0.2 ping statistics --- 00:09:23.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.095 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:09:23.095 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:09:23.095 00:09:23.095 --- 10.0.0.1 ping statistics --- 00:09:23.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.096 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:23.096 only one NIC for nvmf test 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.096 11:21:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.096 rmmod nvme_tcp 00:09:23.096 rmmod nvme_fabrics 00:09:23.096 rmmod nvme_keyring 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.096 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:24.482 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.743 00:09:24.743 real 0m9.845s 00:09:24.743 user 0m2.112s 00:09:24.743 sys 0m5.666s 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:24.743 ************************************ 00:09:24.743 END TEST nvmf_target_multipath 00:09:24.743 ************************************ 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.743 ************************************ 00:09:24.743 START TEST nvmf_zcopy 00:09:24.743 ************************************ 00:09:24.743 11:21:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:24.743 * Looking for test storage... 00:09:24.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.743 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.743 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.743 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.004 11:21:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.004 --rc genhtml_branch_coverage=1 00:09:25.004 --rc genhtml_function_coverage=1 00:09:25.004 --rc genhtml_legend=1 00:09:25.004 --rc geninfo_all_blocks=1 00:09:25.004 --rc geninfo_unexecuted_blocks=1 00:09:25.004 00:09:25.004 ' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.004 --rc genhtml_branch_coverage=1 00:09:25.004 --rc genhtml_function_coverage=1 00:09:25.004 --rc genhtml_legend=1 00:09:25.004 --rc geninfo_all_blocks=1 00:09:25.004 --rc geninfo_unexecuted_blocks=1 00:09:25.004 00:09:25.004 ' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.004 --rc genhtml_branch_coverage=1 00:09:25.004 --rc genhtml_function_coverage=1 00:09:25.004 --rc genhtml_legend=1 00:09:25.004 --rc geninfo_all_blocks=1 00:09:25.004 --rc geninfo_unexecuted_blocks=1 00:09:25.004 00:09:25.004 ' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.004 --rc genhtml_branch_coverage=1 00:09:25.004 --rc 
genhtml_function_coverage=1 00:09:25.004 --rc genhtml_legend=1 00:09:25.004 --rc geninfo_all_blocks=1 00:09:25.004 --rc geninfo_unexecuted_blocks=1 00:09:25.004 00:09:25.004 ' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.004 11:21:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.004 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.005 11:21:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.005 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.150 11:21:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:33.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:33.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:33.150 Found net devices under 0000:31:00.0: cvl_0_0 00:09:33.150 11:21:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:33.150 Found net devices under 0000:31:00.1: cvl_0_1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.150 11:21:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:09:33.150 00:09:33.150 --- 10.0.0.2 ping statistics --- 00:09:33.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.150 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:09:33.150 00:09:33.150 --- 10.0.0.1 ping statistics --- 00:09:33.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.150 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:33.150 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2339183 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2339183 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2339183 ']' 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.151 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.151 [2024-12-07 11:21:31.745630] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:33.151 [2024-12-07 11:21:31.745754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.151 [2024-12-07 11:21:31.928215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.151 [2024-12-07 11:21:32.050644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.151 [2024-12-07 11:21:32.050706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.151 [2024-12-07 11:21:32.050719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.151 [2024-12-07 11:21:32.050733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.151 [2024-12-07 11:21:32.050746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.151 [2024-12-07 11:21:32.052241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 [2024-12-07 11:21:32.585107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 [2024-12-07 11:21:32.609433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 malloc0 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.412 { 00:09:33.412 "params": { 00:09:33.412 "name": "Nvme$subsystem", 00:09:33.412 "trtype": "$TEST_TRANSPORT", 00:09:33.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.412 "adrfam": "ipv4", 00:09:33.412 "trsvcid": "$NVMF_PORT", 00:09:33.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.412 "hdgst": ${hdgst:-false}, 00:09:33.412 "ddgst": ${ddgst:-false} 00:09:33.412 }, 00:09:33.412 "method": "bdev_nvme_attach_controller" 00:09:33.412 } 00:09:33.412 EOF 00:09:33.412 )") 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:33.412 11:21:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.412 "params": { 00:09:33.413 "name": "Nvme1", 00:09:33.413 "trtype": "tcp", 00:09:33.413 "traddr": "10.0.0.2", 00:09:33.413 "adrfam": "ipv4", 00:09:33.413 "trsvcid": "4420", 00:09:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.413 "hdgst": false, 00:09:33.413 "ddgst": false 00:09:33.413 }, 00:09:33.413 "method": "bdev_nvme_attach_controller" 00:09:33.413 }' 00:09:33.673 [2024-12-07 11:21:32.768804] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:33.673 [2024-12-07 11:21:32.768925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339298 ] 00:09:33.673 [2024-12-07 11:21:32.908866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.673 [2024-12-07 11:21:33.007950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.242 Running I/O for 10 seconds... 
00:09:36.201 5822.00 IOPS, 45.48 MiB/s [2024-12-07T10:21:36.936Z] 5872.00 IOPS, 45.88 MiB/s [2024-12-07T10:21:37.876Z] 6558.33 IOPS, 51.24 MiB/s [2024-12-07T10:21:38.817Z] 7057.25 IOPS, 55.13 MiB/s [2024-12-07T10:21:39.758Z] 7359.40 IOPS, 57.50 MiB/s [2024-12-07T10:21:40.699Z] 7553.33 IOPS, 59.01 MiB/s [2024-12-07T10:21:41.638Z] 7694.14 IOPS, 60.11 MiB/s [2024-12-07T10:21:42.577Z] 7798.50 IOPS, 60.93 MiB/s [2024-12-07T10:21:43.959Z] 7881.67 IOPS, 61.58 MiB/s [2024-12-07T10:21:43.959Z] 7947.60 IOPS, 62.09 MiB/s 00:09:44.605 Latency(us) 00:09:44.605 [2024-12-07T10:21:43.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.605 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:44.606 Verification LBA range: start 0x0 length 0x1000 00:09:44.606 Nvme1n1 : 10.01 7948.92 62.10 0.00 0.00 16044.83 1392.64 30365.01 00:09:44.606 [2024-12-07T10:21:43.960Z] =================================================================================================================== 00:09:44.606 [2024-12-07T10:21:43.960Z] Total : 7948.92 62.10 0.00 0.00 16044.83 1392.64 30365.01 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2341558 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.882 11:21:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.882 { 00:09:44.882 "params": { 00:09:44.882 "name": "Nvme$subsystem", 00:09:44.882 "trtype": "$TEST_TRANSPORT", 00:09:44.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.882 "adrfam": "ipv4", 00:09:44.882 "trsvcid": "$NVMF_PORT", 00:09:44.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.882 "hdgst": ${hdgst:-false}, 00:09:44.882 "ddgst": ${ddgst:-false} 00:09:44.882 }, 00:09:44.882 "method": "bdev_nvme_attach_controller" 00:09:44.882 } 00:09:44.882 EOF 00:09:44.882 )") 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:44.882 [2024-12-07 11:21:44.197500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-12-07 11:21:44.197535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:44.883 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:44.883 11:21:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.883 "params": { 00:09:44.883 "name": "Nvme1", 00:09:44.883 "trtype": "tcp", 00:09:44.883 "traddr": "10.0.0.2", 00:09:44.883 "adrfam": "ipv4", 00:09:44.883 "trsvcid": "4420", 00:09:44.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.883 "hdgst": false, 00:09:44.883 "ddgst": false 00:09:44.883 }, 00:09:44.883 "method": "bdev_nvme_attach_controller" 00:09:44.883 }' 00:09:44.883 [2024-12-07 11:21:44.209493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.883 [2024-12-07 11:21:44.209517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.883 [2024-12-07 11:21:44.221496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.883 [2024-12-07 11:21:44.221513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.883 [2024-12-07 11:21:44.233541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.883 [2024-12-07 11:21:44.233558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.245571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.245589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.257588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.257605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.267570] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:45.144 [2024-12-07 11:21:44.267667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341558 ] 00:09:45.144 [2024-12-07 11:21:44.269638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.269654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.281663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.281681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.293681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.293697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.305728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.305745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.317748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.317765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.329787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.329803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.341826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.341843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:45.144 [2024-12-07 11:21:44.353842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.353858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.365884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.365901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.377914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.377931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.389933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.389949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.392504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.144 [2024-12-07 11:21:44.401979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.401995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.414019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.414035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.426046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.426062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.438077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.438093] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.144 [2024-12-07 11:21:44.450092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.144 [2024-12-07 11:21:44.450109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... last two messages repeated at ~12 ms intervals from 11:21:44.462 through 11:21:46.713; identical repeats collapsed ...]
00:09:45.144 [2024-12-07 11:21:44.490275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.667 Running I/O for 5 seconds...
00:09:46.712 17178.00 IOPS, 134.20 MiB/s [2024-12-07T10:21:46.066Z]
00:09:47.650 [2024-12-07 11:21:46.713203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:47.650 [2024-12-07 11:21:46.727114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.727134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.740321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.740340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.754254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.754274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.767901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.767920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.781642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.781662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.795429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.795448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.809373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.809393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.823459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.823479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.834945] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.834964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.848613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.848640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.862793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.862812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.874179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.874199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.888120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.888139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.901755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.901774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.915509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.915528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.929092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.929111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.943302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.943321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.954582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.954602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 17231.50 IOPS, 134.62 MiB/s [2024-12-07T10:21:47.004Z] [2024-12-07 11:21:46.968541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.650 [2024-12-07 11:21:46.968560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.650 [2024-12-07 11:21:46.982738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.651 [2024-12-07 11:21:46.982757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.651 [2024-12-07 11:21:46.998603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.651 [2024-12-07 11:21:46.998622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.012811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.012830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.024038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.024057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.037693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.037713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.051187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.051206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.064661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.064680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.078275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.078295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.091736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.091756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.104643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.104662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.118314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.118334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.131924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.131943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.145673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.145692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.159949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 
[2024-12-07 11:21:47.159973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.174936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.912 [2024-12-07 11:21:47.174956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.912 [2024-12-07 11:21:47.188510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.188528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.913 [2024-12-07 11:21:47.202317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.202336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.913 [2024-12-07 11:21:47.216032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.216052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.913 [2024-12-07 11:21:47.229761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.229781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.913 [2024-12-07 11:21:47.243561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.243580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.913 [2024-12-07 11:21:47.257441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.913 [2024-12-07 11:21:47.257461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.271301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.271321] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.284961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.284981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.298375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.298395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.312642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.312661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.328238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.328258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.342129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.342149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.356290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.356309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.371811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.371831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.385805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.385824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.174 [2024-12-07 11:21:47.399719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.399740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.413528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.413547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.427689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.427712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.439289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.439308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.453203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.453222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.466140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.466160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.480232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.480251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.494272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.494292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.507710] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.507729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.174 [2024-12-07 11:21:47.521522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.174 [2024-12-07 11:21:47.521542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.535046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.535065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.548845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.548865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.562211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.562230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.576200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.576220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.588166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.588185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.601661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.601680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.615444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.615463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.626776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.626795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.641126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.641145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.654603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.654623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.668642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.668662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.679600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.679624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.693571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.693590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.707357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.707376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.721630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 
[2024-12-07 11:21:47.721655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.732127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.732147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.745778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.745797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.759583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.759602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.773274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.773292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.436 [2024-12-07 11:21:47.787048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.436 [2024-12-07 11:21:47.787068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.801064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.801084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.812724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.812743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.826656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.826674] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.840431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.840450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.853884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.853904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.868204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.868224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.883996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.884021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.897699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.897720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.911459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.911479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.925088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.925107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.938790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.938813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.698 [2024-12-07 11:21:47.952518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.952538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 17263.00 IOPS, 134.87 MiB/s [2024-12-07T10:21:48.052Z] [2024-12-07 11:21:47.966293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.966313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.980023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.980044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:47.993752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:47.993772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:48.007349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:48.007369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:48.021055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:48.021075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:48.034418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:48.034438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.698 [2024-12-07 11:21:48.048719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.698 [2024-12-07 11:21:48.048739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.985 [2024-12-07 11:21:48.064143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.064163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.078174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.078194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.091955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.091975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.105561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.105582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.119166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.119186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.133306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.133326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.146420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.146440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.159887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.159907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.985 [2024-12-07 11:21:48.173693] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.985 [2024-12-07 11:21:48.173713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520 "Unable to add namespace") repeats on every add-namespace RPC retry, roughly every 12-15 ms, from 11:21:48.187 through 11:21:49.966 ...]
00:09:49.770 17287.75 IOPS, 135.06 MiB/s [2024-12-07T10:21:49.124Z]
00:09:50.815 17297.00 IOPS, 135.13 MiB/s [2024-12-07T10:21:50.169Z]
00:09:50.815 Latency(us)
00:09:50.815 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:09:50.815 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:50.815 Nvme1n1 :            5.01        17295.57  135.12  0.00    0.00  7392.31  3058.35  16165.55
00:09:50.815 ===================================================================================================================
00:09:50.815 Total :              5.01        17295.57  135.12  0.00    0.00  7392.31  3058.35  16165.55
[... the same error pair continues from 11:21:49.976 through 11:21:50.421, where this section of the log is truncated mid-line ...]
[2024-12-07 11:21:50.421176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.110 [2024-12-07 11:21:50.433200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.110 [2024-12-07 11:21:50.433217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.110 [2024-12-07 11:21:50.445215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.110 [2024-12-07 11:21:50.445232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.110 [2024-12-07 11:21:50.457260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.110 [2024-12-07 11:21:50.457277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.469296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.469312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.481308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.481325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.493357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.493373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.505370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.505386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.517413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.517430] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.529446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.529463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.541462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.541478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.553518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.553535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.565544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.565561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 [2024-12-07 11:21:50.577575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.370 [2024-12-07 11:21:50.577592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2341558) - No such process 00:09:51.370 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2341558 00:09:51.370 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.370 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.370 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.370 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.371 delay0 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.371 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.631 [2024-12-07 11:21:50.725294] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.770 Initializing NVMe Controllers 00:09:59.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.771 Initialization complete. Launching workers. 
00:09:59.771 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 27568 00:09:59.771 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27680, failed to submit 127 00:09:59.771 success 27612, unsuccessful 68, failed 0 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.771 rmmod nvme_tcp 00:09:59.771 rmmod nvme_fabrics 00:09:59.771 rmmod nvme_keyring 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2339183 ']' 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2339183 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2339183 ']' 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2339183 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2339183 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2339183' 00:09:59.771 killing process with pid 2339183 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2339183 00:09:59.771 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2339183 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.771 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.686 00:10:01.686 real 0m36.735s 00:10:01.686 user 0m50.068s 00:10:01.686 sys 0m11.356s 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.686 ************************************ 00:10:01.686 END TEST nvmf_zcopy 00:10:01.686 ************************************ 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.686 ************************************ 00:10:01.686 START TEST nvmf_nmic 00:10:01.686 ************************************ 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.686 * Looking for test storage... 
00:10:01.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:01.686 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.687 11:22:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.687 --rc genhtml_branch_coverage=1 00:10:01.687 --rc genhtml_function_coverage=1 00:10:01.687 --rc genhtml_legend=1 00:10:01.687 --rc geninfo_all_blocks=1 00:10:01.687 --rc geninfo_unexecuted_blocks=1 
00:10:01.687 00:10:01.687 ' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.687 --rc genhtml_branch_coverage=1 00:10:01.687 --rc genhtml_function_coverage=1 00:10:01.687 --rc genhtml_legend=1 00:10:01.687 --rc geninfo_all_blocks=1 00:10:01.687 --rc geninfo_unexecuted_blocks=1 00:10:01.687 00:10:01.687 ' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.687 --rc genhtml_branch_coverage=1 00:10:01.687 --rc genhtml_function_coverage=1 00:10:01.687 --rc genhtml_legend=1 00:10:01.687 --rc geninfo_all_blocks=1 00:10:01.687 --rc geninfo_unexecuted_blocks=1 00:10:01.687 00:10:01.687 ' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.687 --rc genhtml_branch_coverage=1 00:10:01.687 --rc genhtml_function_coverage=1 00:10:01.687 --rc genhtml_legend=1 00:10:01.687 --rc geninfo_all_blocks=1 00:10:01.687 --rc geninfo_unexecuted_blocks=1 00:10:01.687 00:10:01.687 ' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.687 11:22:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.687 
11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.687 11:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.824 11:22:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:09.824 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:09.824 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:09.824 Found net devices under 0000:31:00.0: cvl_0_0 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:09.824 Found net devices under 0000:31:00.1: cvl_0_1 00:10:09.824 
11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.824 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:10:09.824 00:10:09.824 --- 10.0.0.2 ping statistics --- 00:10:09.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.824 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:09.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:09.825 00:10:09.825 --- 10.0.0.1 ping statistics --- 00:10:09.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.825 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2348650 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2348650 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2348650 ']' 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.825 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:09.825 [2024-12-07 11:22:08.629375] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:09.825 [2024-12-07 11:22:08.629509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.825 [2024-12-07 11:22:08.780790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.825 [2024-12-07 11:22:08.884443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.825 [2024-12-07 11:22:08.884486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:09.825 [2024-12-07 11:22:08.884498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.825 [2024-12-07 11:22:08.884510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.825 [2024-12-07 11:22:08.884520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.825 [2024-12-07 11:22:08.886757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.825 [2024-12-07 11:22:08.886841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.825 [2024-12-07 11:22:08.886956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.825 [2024-12-07 11:22:08.886979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.086 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 [2024-12-07 11:22:09.442252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.347 
11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 Malloc0 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 [2024-12-07 11:22:09.549582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:10.347 test case1: single bdev can't be used in multiple subsystems 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 [2024-12-07 11:22:09.585446] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:10.347 [2024-12-07 
11:22:09.585482] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:10.347 [2024-12-07 11:22:09.585495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.347 request: 00:10:10.347 { 00:10:10.347 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:10.347 "namespace": { 00:10:10.348 "bdev_name": "Malloc0", 00:10:10.348 "no_auto_visible": false, 00:10:10.348 "hide_metadata": false 00:10:10.348 }, 00:10:10.348 "method": "nvmf_subsystem_add_ns", 00:10:10.348 "req_id": 1 00:10:10.348 } 00:10:10.348 Got JSON-RPC error response 00:10:10.348 response: 00:10:10.348 { 00:10:10.348 "code": -32602, 00:10:10.348 "message": "Invalid parameters" 00:10:10.348 } 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:10.348 Adding namespace failed - expected result. 
00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:10.348 test case2: host connect to nvmf target in multiple paths 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.348 [2024-12-07 11:22:09.597612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.348 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.442 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:13.384 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.384 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:13.384 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.385 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:13.385 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:15.929 11:22:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.929 [global] 00:10:15.929 thread=1 00:10:15.929 invalidate=1 00:10:15.929 rw=write 00:10:15.929 time_based=1 00:10:15.929 runtime=1 00:10:15.929 ioengine=libaio 00:10:15.929 direct=1 00:10:15.929 bs=4096 00:10:15.929 iodepth=1 00:10:15.929 norandommap=0 00:10:15.929 numjobs=1 00:10:15.929 00:10:15.929 verify_dump=1 00:10:15.929 verify_backlog=512 00:10:15.929 verify_state_save=0 00:10:15.929 do_verify=1 00:10:15.929 verify=crc32c-intel 00:10:15.929 [job0] 00:10:15.929 filename=/dev/nvme0n1 00:10:15.929 Could not set queue depth (nvme0n1) 00:10:15.929 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.929 fio-3.35 00:10:15.929 Starting 1 thread 00:10:17.309 00:10:17.309 job0: (groupid=0, jobs=1): err= 0: pid=2350195: Sat Dec 7 11:22:16 2024 00:10:17.309 read: IOPS=686, BW=2745KiB/s (2811kB/s)(2748KiB/1001msec) 00:10:17.309 slat (nsec): min=7036, max=56990, avg=24061.11, stdev=8419.57 00:10:17.309 clat (usec): min=411, max=993, avg=775.79, stdev=93.90 00:10:17.309 lat (usec): min=419, max=1020, avg=799.85, 
stdev=96.79 00:10:17.309 clat percentiles (usec): 00:10:17.309 | 1.00th=[ 515], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 701], 00:10:17.309 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:10:17.309 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 906], 00:10:17.309 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 996], 00:10:17.309 | 99.99th=[ 996] 00:10:17.309 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:17.309 slat (usec): min=3, max=1612, avg=26.54, stdev=63.79 00:10:17.309 clat (usec): min=191, max=682, avg=402.73, stdev=78.07 00:10:17.309 lat (usec): min=201, max=1957, avg=429.27, stdev=104.82 00:10:17.309 clat percentiles (usec): 00:10:17.309 | 1.00th=[ 231], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 334], 00:10:17.309 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 404], 60.00th=[ 437], 00:10:17.309 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 510], 00:10:17.309 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 685], 00:10:17.309 | 99.99th=[ 685] 00:10:17.309 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.309 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.309 lat (usec) : 250=1.34%, 500=53.59%, 750=18.12%, 1000=26.94% 00:10:17.309 cpu : usr=2.10%, sys=4.30%, ctx=1715, majf=0, minf=1 00:10:17.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.309 issued rwts: total=687,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.309 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.309 00:10:17.309 Run status group 0 (all jobs): 00:10:17.309 READ: bw=2745KiB/s (2811kB/s), 2745KiB/s-2745KiB/s (2811kB/s-2811kB/s), io=2748KiB (2814kB), run=1001-1001msec 00:10:17.309 WRITE: bw=4092KiB/s 
(4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:10:17.309 00:10:17.309 Disk stats (read/write): 00:10:17.309 nvme0n1: ios=650/1024, merge=0/0, ticks=593/393, in_queue=986, util=98.80% 00:10:17.309 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.569 11:22:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.569 rmmod nvme_tcp 00:10:17.569 rmmod nvme_fabrics 00:10:17.569 rmmod nvme_keyring 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2348650 ']' 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2348650 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2348650 ']' 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2348650 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348650 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348650' 00:10:17.569 killing process with pid 2348650 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2348650 00:10:17.569 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2348650 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.511 11:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.059 00:10:21.059 real 0m19.080s 00:10:21.059 user 0m48.928s 00:10:21.059 sys 0m6.869s 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.059 ************************************ 00:10:21.059 END TEST nvmf_nmic 00:10:21.059 ************************************ 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:21.059 11:22:19 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.059 ************************************ 00:10:21.059 START TEST nvmf_fio_target 00:10:21.059 ************************************ 00:10:21.059 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:21.059 * Looking for test storage... 00:10:21.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.059 
11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.059 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:21.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.059 --rc genhtml_branch_coverage=1 00:10:21.059 --rc genhtml_function_coverage=1 00:10:21.059 --rc genhtml_legend=1 00:10:21.059 --rc geninfo_all_blocks=1 00:10:21.060 --rc geninfo_unexecuted_blocks=1 00:10:21.060 00:10:21.060 ' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:21.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.060 --rc genhtml_branch_coverage=1 00:10:21.060 --rc genhtml_function_coverage=1 00:10:21.060 --rc genhtml_legend=1 00:10:21.060 --rc geninfo_all_blocks=1 00:10:21.060 --rc geninfo_unexecuted_blocks=1 00:10:21.060 00:10:21.060 ' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:21.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.060 --rc genhtml_branch_coverage=1 00:10:21.060 --rc genhtml_function_coverage=1 00:10:21.060 --rc genhtml_legend=1 00:10:21.060 --rc geninfo_all_blocks=1 00:10:21.060 --rc geninfo_unexecuted_blocks=1 00:10:21.060 00:10:21.060 ' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:21.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.060 --rc genhtml_branch_coverage=1 00:10:21.060 --rc 
genhtml_function_coverage=1 00:10:21.060 --rc genhtml_legend=1 00:10:21.060 --rc geninfo_all_blocks=1 00:10:21.060 --rc geninfo_unexecuted_blocks=1 00:10:21.060 00:10:21.060 ' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.060 11:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.196 11:22:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:29.196 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:29.196 11:22:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:29.196 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:29.196 Found net devices under 0000:31:00.0: cvl_0_0 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:29.196 Found net devices under 0000:31:00.1: cvl_0_1 
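
The discovery loop traced above maps each NIC's PCI address to its kernel net device by globbing sysfs and keeping the basename, exactly as `nvmf/common.sh` does. A minimal standalone sketch of that idea, run against a throwaway directory that mimics the `/sys/bus/pci/devices/<addr>/net/` layout (the temp tree and device names here are illustrative, not a real sysfs):

```shell
#!/usr/bin/env bash
set -eu
# Mimic /sys/bus/pci/devices/<addr>/net/<ifname> with a temp tree.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    # Same pattern as the log: glob the net/ subdir, then strip to the basename.
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The real script additionally filters on driver name and link state (`[[ up == up ]]` above); this sketch keeps only the path-to-interface resolution.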
00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.196 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:10:29.197 00:10:29.197 --- 10.0.0.2 ping statistics --- 00:10:29.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.197 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:10:29.197 00:10:29.197 --- 10.0.0.1 ping statistics --- 00:10:29.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.197 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2354936 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2354936 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2354936 ']' 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.197 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.197 [2024-12-07 11:22:27.594986] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:29.197 [2024-12-07 11:22:27.595135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.197 [2024-12-07 11:22:27.753293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.197 [2024-12-07 11:22:27.857242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.197 [2024-12-07 11:22:27.857284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.197 [2024-12-07 11:22:27.857297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.197 [2024-12-07 11:22:27.857309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.197 [2024-12-07 11:22:27.857318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:29.197 [2024-12-07 11:22:27.859558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.197 [2024-12-07 11:22:27.859660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.197 [2024-12-07 11:22:27.859775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.197 [2024-12-07 11:22:27.859799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.197 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.458 [2024-12-07 11:22:28.564217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.458 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.718 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:29.718 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.718 11:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:29.978 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.978 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:29.978 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.238 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:30.238 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:30.499 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.760 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:30.760 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.021 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:31.021 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.282 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:31.282 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:31.282 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.543 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:31.543 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.804 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:31.804 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.804 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.066 [2024-12-07 11:22:31.298772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.066 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:32.327 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:32.588 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:33.972 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:35.886 11:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:36.148 [global] 00:10:36.148 thread=1 00:10:36.148 invalidate=1 00:10:36.148 rw=write 00:10:36.148 time_based=1 00:10:36.148 runtime=1 00:10:36.148 ioengine=libaio 00:10:36.148 direct=1 00:10:36.148 bs=4096 00:10:36.148 iodepth=1 00:10:36.148 norandommap=0 00:10:36.148 numjobs=1 00:10:36.148 00:10:36.148 
verify_dump=1 00:10:36.148 verify_backlog=512 00:10:36.148 verify_state_save=0 00:10:36.148 do_verify=1 00:10:36.148 verify=crc32c-intel 00:10:36.148 [job0] 00:10:36.148 filename=/dev/nvme0n1 00:10:36.148 [job1] 00:10:36.148 filename=/dev/nvme0n2 00:10:36.148 [job2] 00:10:36.148 filename=/dev/nvme0n3 00:10:36.148 [job3] 00:10:36.148 filename=/dev/nvme0n4 00:10:36.148 Could not set queue depth (nvme0n1) 00:10:36.148 Could not set queue depth (nvme0n2) 00:10:36.148 Could not set queue depth (nvme0n3) 00:10:36.148 Could not set queue depth (nvme0n4) 00:10:36.407 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.407 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.407 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.407 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.407 fio-3.35 00:10:36.407 Starting 4 threads 00:10:37.788 00:10:37.788 job0: (groupid=0, jobs=1): err= 0: pid=2356856: Sat Dec 7 11:22:36 2024 00:10:37.788 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:37.788 slat (nsec): min=7765, max=44895, avg=26417.78, stdev=2711.97 00:10:37.788 clat (usec): min=520, max=1204, avg=974.30, stdev=86.80 00:10:37.788 lat (usec): min=547, max=1231, avg=1000.71, stdev=86.84 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 914], 00:10:37.788 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:10:37.788 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:37.788 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:37.788 | 99.99th=[ 1205] 00:10:37.788 write: IOPS=760, BW=3041KiB/s (3114kB/s)(3044KiB/1001msec); 0 zone resets 00:10:37.788 slat (nsec): min=9987, max=64230, avg=31658.61, 
stdev=9034.54 00:10:37.788 clat (usec): min=212, max=858, avg=596.44, stdev=111.52 00:10:37.788 lat (usec): min=222, max=893, avg=628.10, stdev=115.15 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 461], 20.00th=[ 498], 00:10:37.788 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:10:37.788 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 758], 00:10:37.788 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 857], 99.95th=[ 857], 00:10:37.788 | 99.99th=[ 857] 00:10:37.788 bw ( KiB/s): min= 4096, max= 4096, per=31.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.788 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.788 lat (usec) : 250=0.08%, 500=12.18%, 750=44.30%, 1000=27.49% 00:10:37.788 lat (msec) : 2=15.95% 00:10:37.788 cpu : usr=1.90%, sys=3.90%, ctx=1274, majf=0, minf=1 00:10:37.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 issued rwts: total=512,761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.788 job1: (groupid=0, jobs=1): err= 0: pid=2356857: Sat Dec 7 11:22:36 2024 00:10:37.788 read: IOPS=541, BW=2166KiB/s (2218kB/s)(2168KiB/1001msec) 00:10:37.788 slat (nsec): min=6781, max=45896, avg=25101.09, stdev=5967.94 00:10:37.788 clat (usec): min=260, max=1171, avg=811.71, stdev=133.48 00:10:37.788 lat (usec): min=287, max=1197, avg=836.81, stdev=134.12 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 490], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 693], 00:10:37.788 | 30.00th=[ 725], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 857], 00:10:37.788 | 70.00th=[ 889], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[ 1004], 00:10:37.788 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1172], 
99.95th=[ 1172], 00:10:37.788 | 99.99th=[ 1172] 00:10:37.788 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:37.788 slat (nsec): min=9558, max=53512, avg=32678.32, stdev=8773.48 00:10:37.788 clat (usec): min=113, max=879, avg=489.45, stdev=109.87 00:10:37.788 lat (usec): min=129, max=929, avg=522.13, stdev=113.15 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 237], 5.00th=[ 285], 10.00th=[ 359], 20.00th=[ 400], 00:10:37.788 | 30.00th=[ 429], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 523], 00:10:37.788 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 652], 00:10:37.788 | 99.00th=[ 734], 99.50th=[ 766], 99.90th=[ 848], 99.95th=[ 881], 00:10:37.788 | 99.99th=[ 881] 00:10:37.788 bw ( KiB/s): min= 4096, max= 4096, per=31.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.788 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.788 lat (usec) : 250=1.15%, 500=32.44%, 750=42.59%, 1000=21.84% 00:10:37.788 lat (msec) : 2=1.98% 00:10:37.788 cpu : usr=1.90%, sys=5.30%, ctx=1568, majf=0, minf=1 00:10:37.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 issued rwts: total=542,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.788 job2: (groupid=0, jobs=1): err= 0: pid=2356858: Sat Dec 7 11:22:36 2024 00:10:37.788 read: IOPS=671, BW=2685KiB/s (2750kB/s)(2688KiB/1001msec) 00:10:37.788 slat (nsec): min=7127, max=46265, avg=26167.57, stdev=4632.32 00:10:37.788 clat (usec): min=290, max=1109, avg=733.59, stdev=164.77 00:10:37.788 lat (usec): min=317, max=1135, avg=759.76, stdev=165.07 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 379], 5.00th=[ 453], 10.00th=[ 515], 20.00th=[ 570], 00:10:37.788 | 30.00th=[ 627], 
40.00th=[ 693], 50.00th=[ 750], 60.00th=[ 799], 00:10:37.788 | 70.00th=[ 840], 80.00th=[ 889], 90.00th=[ 947], 95.00th=[ 979], 00:10:37.788 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:37.788 | 99.99th=[ 1106] 00:10:37.788 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:37.788 slat (nsec): min=10068, max=70873, avg=31887.07, stdev=9379.78 00:10:37.788 clat (usec): min=143, max=841, avg=433.64, stdev=151.86 00:10:37.788 lat (usec): min=155, max=876, avg=465.53, stdev=154.15 00:10:37.788 clat percentiles (usec): 00:10:37.788 | 1.00th=[ 155], 5.00th=[ 190], 10.00th=[ 249], 20.00th=[ 297], 00:10:37.788 | 30.00th=[ 326], 40.00th=[ 379], 50.00th=[ 429], 60.00th=[ 465], 00:10:37.788 | 70.00th=[ 515], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 701], 00:10:37.788 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 816], 99.95th=[ 840], 00:10:37.788 | 99.99th=[ 840] 00:10:37.788 bw ( KiB/s): min= 4096, max= 4096, per=31.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.788 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.788 lat (usec) : 250=6.07%, 500=37.62%, 750=35.08%, 1000=19.87% 00:10:37.788 lat (msec) : 2=1.36% 00:10:37.788 cpu : usr=2.40%, sys=5.20%, ctx=1699, majf=0, minf=1 00:10:37.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.788 issued rwts: total=672,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.788 job3: (groupid=0, jobs=1): err= 0: pid=2356859: Sat Dec 7 11:22:36 2024 00:10:37.788 read: IOPS=18, BW=74.2KiB/s (76.0kB/s)(76.0KiB/1024msec) 00:10:37.789 slat (nsec): min=10102, max=30174, avg=15860.37, stdev=7802.41 00:10:37.789 clat (usec): min=906, max=42066, avg=39115.03, stdev=9262.37 00:10:37.789 lat 
(usec): min=932, max=42077, avg=39130.90, stdev=9259.97 00:10:37.789 clat percentiles (usec): 00:10:37.789 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41157], 20.00th=[41157], 00:10:37.789 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.789 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:10:37.789 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.789 | 99.99th=[42206] 00:10:37.789 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:37.789 slat (usec): min=7, max=34440, avg=97.01, stdev=1520.95 00:10:37.789 clat (usec): min=127, max=938, avg=444.31, stdev=142.81 00:10:37.789 lat (usec): min=140, max=35221, avg=541.32, stdev=1542.99 00:10:37.789 clat percentiles (usec): 00:10:37.789 | 1.00th=[ 135], 5.00th=[ 153], 10.00th=[ 269], 20.00th=[ 314], 00:10:37.789 | 30.00th=[ 367], 40.00th=[ 412], 50.00th=[ 457], 60.00th=[ 482], 00:10:37.789 | 70.00th=[ 515], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 660], 00:10:37.789 | 99.00th=[ 783], 99.50th=[ 848], 99.90th=[ 938], 99.95th=[ 938], 00:10:37.789 | 99.99th=[ 938] 00:10:37.789 bw ( KiB/s): min= 4096, max= 4096, per=31.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.789 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.789 lat (usec) : 250=6.78%, 500=57.63%, 750=30.89%, 1000=1.32% 00:10:37.789 lat (msec) : 50=3.39% 00:10:37.789 cpu : usr=0.49%, sys=1.56%, ctx=534, majf=0, minf=1 00:10:37.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.789 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.789 00:10:37.789 Run status group 0 (all jobs): 00:10:37.789 READ: bw=6816KiB/s (6980kB/s), 
74.2KiB/s-2685KiB/s (76.0kB/s-2750kB/s), io=6980KiB (7148kB), run=1001-1024msec 00:10:37.789 WRITE: bw=12.7MiB/s (13.3MB/s), 2000KiB/s-4092KiB/s (2048kB/s-4190kB/s), io=13.0MiB (13.6MB), run=1001-1024msec 00:10:37.789 00:10:37.789 Disk stats (read/write): 00:10:37.789 nvme0n1: ios=522/512, merge=0/0, ticks=1294/296, in_queue=1590, util=83.47% 00:10:37.789 nvme0n2: ios=534/756, merge=0/0, ticks=1293/345, in_queue=1638, util=87.38% 00:10:37.789 nvme0n3: ios=534/913, merge=0/0, ticks=1257/366, in_queue=1623, util=91.50% 00:10:37.789 nvme0n4: ios=68/512, merge=0/0, ticks=894/208, in_queue=1102, util=96.88% 00:10:37.789 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:37.789 [global] 00:10:37.789 thread=1 00:10:37.789 invalidate=1 00:10:37.789 rw=randwrite 00:10:37.789 time_based=1 00:10:37.789 runtime=1 00:10:37.789 ioengine=libaio 00:10:37.789 direct=1 00:10:37.789 bs=4096 00:10:37.789 iodepth=1 00:10:37.789 norandommap=0 00:10:37.789 numjobs=1 00:10:37.789 00:10:37.789 verify_dump=1 00:10:37.789 verify_backlog=512 00:10:37.789 verify_state_save=0 00:10:37.789 do_verify=1 00:10:37.789 verify=crc32c-intel 00:10:37.789 [job0] 00:10:37.789 filename=/dev/nvme0n1 00:10:37.789 [job1] 00:10:37.789 filename=/dev/nvme0n2 00:10:37.789 [job2] 00:10:37.789 filename=/dev/nvme0n3 00:10:37.789 [job3] 00:10:37.789 filename=/dev/nvme0n4 00:10:37.789 Could not set queue depth (nvme0n1) 00:10:37.789 Could not set queue depth (nvme0n2) 00:10:37.789 Could not set queue depth (nvme0n3) 00:10:37.789 Could not set queue depth (nvme0n4) 00:10:38.048 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.048 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.048 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.048 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.048 fio-3.35 00:10:38.048 Starting 4 threads 00:10:39.431 00:10:39.431 job0: (groupid=0, jobs=1): err= 0: pid=2357381: Sat Dec 7 11:22:38 2024 00:10:39.431 read: IOPS=59, BW=238KiB/s (244kB/s)(248KiB/1042msec) 00:10:39.431 slat (nsec): min=7054, max=29123, avg=22901.13, stdev=8566.29 00:10:39.431 clat (usec): min=468, max=42964, avg=11376.43, stdev=18202.39 00:10:39.431 lat (usec): min=496, max=42992, avg=11399.33, stdev=18205.54 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 469], 5.00th=[ 570], 10.00th=[ 594], 20.00th=[ 660], 00:10:39.431 | 30.00th=[ 693], 40.00th=[ 734], 50.00th=[ 775], 60.00th=[ 840], 00:10:39.431 | 70.00th=[ 881], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:39.431 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:39.431 | 99.99th=[42730] 00:10:39.431 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:10:39.431 slat (nsec): min=9497, max=54626, avg=30773.50, stdev=10089.09 00:10:39.431 clat (usec): min=330, max=1153, avg=615.40, stdev=111.34 00:10:39.431 lat (usec): min=360, max=1188, avg=646.17, stdev=116.45 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 523], 00:10:39.431 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:39.431 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:10:39.431 | 99.00th=[ 840], 99.50th=[ 963], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:39.431 | 99.99th=[ 1156] 00:10:39.431 bw ( KiB/s): min= 4096, max= 4096, per=39.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.431 lat (usec) : 500=15.51%, 750=72.13%, 1000=9.23% 00:10:39.431 lat (msec) : 2=0.35%, 50=2.79% 00:10:39.431 cpu : usr=1.15%, 
sys=2.02%, ctx=575, majf=0, minf=1 00:10:39.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 issued rwts: total=62,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.431 job1: (groupid=0, jobs=1): err= 0: pid=2357382: Sat Dec 7 11:22:38 2024 00:10:39.431 read: IOPS=551, BW=2206KiB/s (2259kB/s)(2208KiB/1001msec) 00:10:39.431 slat (nsec): min=6741, max=60243, avg=25762.25, stdev=7131.75 00:10:39.431 clat (usec): min=418, max=1010, avg=752.97, stdev=118.53 00:10:39.431 lat (usec): min=445, max=1037, avg=778.74, stdev=119.51 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 469], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 652], 00:10:39.431 | 30.00th=[ 685], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 807], 00:10:39.431 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 922], 00:10:39.431 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1012], 99.95th=[ 1012], 00:10:39.431 | 99.99th=[ 1012] 00:10:39.431 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:39.431 slat (nsec): min=8966, max=81810, avg=32236.09, stdev=9611.81 00:10:39.431 clat (usec): min=162, max=983, avg=512.48, stdev=113.34 00:10:39.431 lat (usec): min=172, max=1032, avg=544.72, stdev=117.84 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 227], 5.00th=[ 306], 10.00th=[ 371], 20.00th=[ 412], 00:10:39.431 | 30.00th=[ 461], 40.00th=[ 494], 50.00th=[ 515], 60.00th=[ 553], 00:10:39.431 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 676], 00:10:39.431 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 783], 99.95th=[ 988], 00:10:39.431 | 99.99th=[ 988] 00:10:39.431 bw ( KiB/s): min= 4096, max= 4096, per=39.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.431 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.431 lat (usec) : 250=0.82%, 500=27.79%, 750=52.86%, 1000=18.34% 00:10:39.431 lat (msec) : 2=0.19% 00:10:39.431 cpu : usr=3.70%, sys=5.70%, ctx=1578, majf=0, minf=1 00:10:39.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 issued rwts: total=552,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.431 job2: (groupid=0, jobs=1): err= 0: pid=2357383: Sat Dec 7 11:22:38 2024 00:10:39.431 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:10:39.431 slat (nsec): min=26329, max=27419, avg=26699.81, stdev=267.88 00:10:39.431 clat (usec): min=1017, max=42926, avg=39553.61, stdev=10281.78 00:10:39.431 lat (usec): min=1044, max=42952, avg=39580.31, stdev=10281.80 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41681], 20.00th=[41681], 00:10:39.431 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:39.431 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:39.431 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:39.431 | 99.99th=[42730] 00:10:39.431 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:39.431 slat (nsec): min=9963, max=59663, avg=32690.21, stdev=7335.49 00:10:39.431 clat (usec): min=224, max=1066, avg=679.91, stdev=144.18 00:10:39.431 lat (usec): min=235, max=1100, avg=712.60, stdev=145.54 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 310], 5.00th=[ 408], 10.00th=[ 478], 20.00th=[ 570], 00:10:39.431 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 734], 00:10:39.431 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 889], 00:10:39.431 | 
99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1074], 99.95th=[ 1074], 00:10:39.431 | 99.99th=[ 1074] 00:10:39.431 bw ( KiB/s): min= 4096, max= 4096, per=39.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.431 lat (usec) : 250=0.19%, 500=10.98%, 750=51.14%, 1000=34.28% 00:10:39.431 lat (msec) : 2=0.57%, 50=2.84% 00:10:39.431 cpu : usr=1.20%, sys=1.30%, ctx=529, majf=0, minf=1 00:10:39.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.431 job3: (groupid=0, jobs=1): err= 0: pid=2357384: Sat Dec 7 11:22:38 2024 00:10:39.431 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:39.431 slat (nsec): min=25560, max=58499, avg=26590.89, stdev=2733.77 00:10:39.431 clat (usec): min=702, max=1259, avg=1016.85, stdev=86.24 00:10:39.431 lat (usec): min=729, max=1284, avg=1043.44, stdev=86.07 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 758], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 963], 00:10:39.431 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:10:39.431 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1139], 00:10:39.431 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:39.431 | 99.99th=[ 1254] 00:10:39.431 write: IOPS=654, BW=2617KiB/s (2680kB/s)(2620KiB/1001msec); 0 zone resets 00:10:39.431 slat (nsec): min=9833, max=55132, avg=31830.48, stdev=7610.49 00:10:39.431 clat (usec): min=197, max=1106, avg=664.48, stdev=141.54 00:10:39.431 lat (usec): min=231, max=1141, avg=696.31, stdev=143.41 00:10:39.431 clat percentiles (usec): 00:10:39.431 | 1.00th=[ 310], 5.00th=[ 
383], 10.00th=[ 445], 20.00th=[ 545], 00:10:39.431 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 725], 00:10:39.431 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 848], 00:10:39.431 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:39.431 | 99.99th=[ 1106] 00:10:39.431 bw ( KiB/s): min= 4096, max= 4096, per=39.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.431 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.431 lat (usec) : 250=0.17%, 500=7.54%, 750=30.25%, 1000=33.25% 00:10:39.431 lat (msec) : 2=28.79% 00:10:39.431 cpu : usr=1.60%, sys=3.70%, ctx=1170, majf=0, minf=1 00:10:39.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.431 issued rwts: total=512,655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.431 00:10:39.431 Run status group 0 (all jobs): 00:10:39.431 READ: bw=4384KiB/s (4489kB/s), 63.8KiB/s-2206KiB/s (65.3kB/s-2259kB/s), io=4568KiB (4678kB), run=1001-1042msec 00:10:39.431 WRITE: bw=10.1MiB/s (10.6MB/s), 1965KiB/s-4092KiB/s (2013kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1042msec 00:10:39.431 00:10:39.431 Disk stats (read/write): 00:10:39.431 nvme0n1: ios=97/512, merge=0/0, ticks=587/261, in_queue=848, util=87.17% 00:10:39.431 nvme0n2: ios=552/781, merge=0/0, ticks=426/278, in_queue=704, util=91.13% 00:10:39.431 nvme0n3: ios=60/512, merge=0/0, ticks=630/329, in_queue=959, util=95.36% 00:10:39.431 nvme0n4: ios=508/512, merge=0/0, ticks=1167/328, in_queue=1495, util=94.12% 00:10:39.432 11:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:39.432 [global] 00:10:39.432 
thread=1 00:10:39.432 invalidate=1 00:10:39.432 rw=write 00:10:39.432 time_based=1 00:10:39.432 runtime=1 00:10:39.432 ioengine=libaio 00:10:39.432 direct=1 00:10:39.432 bs=4096 00:10:39.432 iodepth=128 00:10:39.432 norandommap=0 00:10:39.432 numjobs=1 00:10:39.432 00:10:39.432 verify_dump=1 00:10:39.432 verify_backlog=512 00:10:39.432 verify_state_save=0 00:10:39.432 do_verify=1 00:10:39.432 verify=crc32c-intel 00:10:39.432 [job0] 00:10:39.432 filename=/dev/nvme0n1 00:10:39.432 [job1] 00:10:39.432 filename=/dev/nvme0n2 00:10:39.432 [job2] 00:10:39.432 filename=/dev/nvme0n3 00:10:39.432 [job3] 00:10:39.432 filename=/dev/nvme0n4 00:10:39.432 Could not set queue depth (nvme0n1) 00:10:39.432 Could not set queue depth (nvme0n2) 00:10:39.432 Could not set queue depth (nvme0n3) 00:10:39.432 Could not set queue depth (nvme0n4) 00:10:39.692 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.692 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.692 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.692 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.692 fio-3.35 00:10:39.692 Starting 4 threads 00:10:41.077 00:10:41.077 job0: (groupid=0, jobs=1): err= 0: pid=2357905: Sat Dec 7 11:22:40 2024 00:10:41.077 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:10:41.077 slat (nsec): min=974, max=17814k, avg=89873.04, stdev=711380.21 00:10:41.077 clat (usec): min=3915, max=40489, avg=11492.46, stdev=4660.74 00:10:41.077 lat (usec): min=3923, max=40519, avg=11582.33, stdev=4707.45 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 4424], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8586], 00:10:41.077 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10421], 00:10:41.077 | 70.00th=[11600], 
80.00th=[13304], 90.00th=[16712], 95.00th=[21365], 00:10:41.077 | 99.00th=[29754], 99.50th=[29754], 99.90th=[29754], 99.95th=[32637], 00:10:41.077 | 99.99th=[40633] 00:10:41.077 write: IOPS=5492, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1009msec); 0 zone resets 00:10:41.077 slat (nsec): min=1699, max=9873.2k, avg=90095.78, stdev=480811.54 00:10:41.077 clat (usec): min=1693, max=80185, avg=12434.76, stdev=10095.70 00:10:41.077 lat (usec): min=1703, max=80189, avg=12524.85, stdev=10151.35 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 3425], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 8586], 00:10:41.077 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:41.077 | 70.00th=[10159], 80.00th=[12911], 90.00th=[20841], 95.00th=[27657], 00:10:41.077 | 99.00th=[70779], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:10:41.077 | 99.99th=[80217] 00:10:41.077 bw ( KiB/s): min=18736, max=24576, per=25.71%, avg=21656.00, stdev=4129.50, samples=2 00:10:41.077 iops : min= 4684, max= 6144, avg=5414.00, stdev=1032.38, samples=2 00:10:41.077 lat (msec) : 2=0.06%, 4=1.06%, 10=52.87%, 20=36.84%, 50=8.14% 00:10:41.077 lat (msec) : 100=1.03% 00:10:41.077 cpu : usr=4.07%, sys=5.56%, ctx=642, majf=0, minf=1 00:10:41.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:41.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.077 issued rwts: total=5120,5542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.077 job1: (groupid=0, jobs=1): err= 0: pid=2357906: Sat Dec 7 11:22:40 2024 00:10:41.077 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:41.077 slat (nsec): min=938, max=20335k, avg=114591.37, stdev=795227.98 00:10:41.077 clat (usec): min=3974, max=66318, avg=13899.88, stdev=9910.56 00:10:41.077 lat (usec): min=3980, max=66348, 
avg=14014.47, stdev=9985.42 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8160], 00:10:41.077 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[11600], 00:10:41.077 | 70.00th=[13173], 80.00th=[17171], 90.00th=[25297], 95.00th=[36963], 00:10:41.077 | 99.00th=[55313], 99.50th=[55313], 99.90th=[61604], 99.95th=[61604], 00:10:41.077 | 99.99th=[66323] 00:10:41.077 write: IOPS=4577, BW=17.9MiB/s (18.8MB/s)(17.9MiB/1002msec); 0 zone resets 00:10:41.077 slat (nsec): min=1586, max=9732.6k, avg=110202.14, stdev=468374.82 00:10:41.077 clat (usec): min=1216, max=40816, avg=15298.65, stdev=6078.45 00:10:41.077 lat (usec): min=1227, max=40823, avg=15408.85, stdev=6119.23 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 2835], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 9372], 00:10:41.077 | 30.00th=[13566], 40.00th=[15270], 50.00th=[15795], 60.00th=[17433], 00:10:41.077 | 70.00th=[18220], 80.00th=[18744], 90.00th=[21627], 95.00th=[24511], 00:10:41.077 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39584], 99.95th=[40633], 00:10:41.077 | 99.99th=[40633] 00:10:41.077 bw ( KiB/s): min=16368, max=19312, per=21.18%, avg=17840.00, stdev=2081.72, samples=2 00:10:41.077 iops : min= 4092, max= 4828, avg=4460.00, stdev=520.43, samples=2 00:10:41.077 lat (msec) : 2=0.13%, 4=1.31%, 10=32.62%, 20=49.90%, 50=14.65% 00:10:41.077 lat (msec) : 100=1.39% 00:10:41.077 cpu : usr=3.10%, sys=4.00%, ctx=557, majf=0, minf=2 00:10:41.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:41.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.077 issued rwts: total=4096,4587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.077 job2: (groupid=0, jobs=1): err= 0: pid=2357907: Sat Dec 7 11:22:40 2024 
00:10:41.077 read: IOPS=6041, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1004msec) 00:10:41.077 slat (nsec): min=940, max=8554.8k, avg=71375.83, stdev=484684.55 00:10:41.077 clat (usec): min=2474, max=23121, avg=9926.98, stdev=2791.97 00:10:41.077 lat (usec): min=2728, max=23127, avg=9998.36, stdev=2830.17 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 4817], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8029], 00:10:41.077 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:10:41.077 | 70.00th=[10814], 80.00th=[12387], 90.00th=[13960], 95.00th=[15008], 00:10:41.077 | 99.00th=[18220], 99.50th=[18744], 99.90th=[22938], 99.95th=[22938], 00:10:41.077 | 99.99th=[23200] 00:10:41.077 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:41.077 slat (nsec): min=1632, max=9885.0k, avg=76665.68, stdev=523741.42 00:10:41.077 clat (usec): min=764, max=34707, avg=10906.72, stdev=5607.29 00:10:41.077 lat (usec): min=1262, max=34718, avg=10983.39, stdev=5649.31 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 2376], 5.00th=[ 4555], 10.00th=[ 6783], 20.00th=[ 7898], 00:10:41.077 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 9634], 00:10:41.077 | 70.00th=[11600], 80.00th=[13829], 90.00th=[18482], 95.00th=[23462], 00:10:41.077 | 99.00th=[32113], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:10:41.077 | 99.99th=[34866] 00:10:41.077 bw ( KiB/s): min=24576, max=24576, per=29.17%, avg=24576.00, stdev= 0.00, samples=2 00:10:41.077 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:41.077 lat (usec) : 1000=0.01% 00:10:41.077 lat (msec) : 2=0.41%, 4=1.59%, 10=60.83%, 20=32.60%, 50=4.56% 00:10:41.077 cpu : usr=4.09%, sys=5.78%, ctx=645, majf=0, minf=1 00:10:41.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:41.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.077 issued rwts: total=6066,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.077 job3: (groupid=0, jobs=1): err= 0: pid=2357908: Sat Dec 7 11:22:40 2024 00:10:41.077 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:10:41.077 slat (nsec): min=967, max=14559k, avg=100830.69, stdev=670826.85 00:10:41.077 clat (usec): min=4249, max=41129, avg=12836.75, stdev=5520.83 00:10:41.077 lat (usec): min=4254, max=41161, avg=12937.58, stdev=5578.67 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 4883], 5.00th=[ 6849], 10.00th=[ 7767], 20.00th=[ 8586], 00:10:41.077 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[13304], 00:10:41.077 | 70.00th=[13829], 80.00th=[14091], 90.00th=[16057], 95.00th=[25297], 00:10:41.077 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39060], 99.95th=[40633], 00:10:41.077 | 99.99th=[41157] 00:10:41.077 write: IOPS=4948, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1010msec); 0 zone resets 00:10:41.077 slat (nsec): min=1651, max=14694k, avg=97146.71, stdev=614053.61 00:10:41.077 clat (usec): min=1238, max=45173, avg=13768.04, stdev=8537.43 00:10:41.077 lat (usec): min=1249, max=45176, avg=13865.18, stdev=8583.36 00:10:41.077 clat percentiles (usec): 00:10:41.077 | 1.00th=[ 3818], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7439], 00:10:41.077 | 30.00th=[ 8029], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[13566], 00:10:41.077 | 70.00th=[15401], 80.00th=[18220], 90.00th=[26608], 95.00th=[34341], 00:10:41.077 | 99.00th=[40633], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:10:41.077 | 99.99th=[45351] 00:10:41.077 bw ( KiB/s): min=18488, max=20480, per=23.13%, avg=19484.00, stdev=1408.56, samples=2 00:10:41.077 iops : min= 4622, max= 5120, avg=4871.00, stdev=352.14, samples=2 00:10:41.077 lat (msec) : 2=0.02%, 4=0.82%, 10=35.30%, 20=51.30%, 50=12.55% 00:10:41.077 cpu : usr=4.16%, sys=4.46%, ctx=447, majf=0, minf=2 
00:10:41.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:41.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.077 issued rwts: total=4608,4998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.077 00:10:41.077 Run status group 0 (all jobs): 00:10:41.077 READ: bw=76.9MiB/s (80.7MB/s), 16.0MiB/s-23.6MiB/s (16.7MB/s-24.7MB/s), io=77.7MiB (81.5MB), run=1002-1010msec 00:10:41.077 WRITE: bw=82.3MiB/s (86.3MB/s), 17.9MiB/s-23.9MiB/s (18.8MB/s-25.1MB/s), io=83.1MiB (87.1MB), run=1002-1010msec 00:10:41.077 00:10:41.077 Disk stats (read/write): 00:10:41.077 nvme0n1: ios=4508/4608, merge=0/0, ticks=51335/50760, in_queue=102095, util=96.29% 00:10:41.077 nvme0n2: ios=3778/4096, merge=0/0, ticks=39788/62247, in_queue=102035, util=87.97% 00:10:41.077 nvme0n3: ios=5139/5362, merge=0/0, ticks=35980/37230, in_queue=73210, util=99.47% 00:10:41.077 nvme0n4: ios=3798/4096, merge=0/0, ticks=24625/27158, in_queue=51783, util=98.18% 00:10:41.077 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:41.077 [global] 00:10:41.077 thread=1 00:10:41.077 invalidate=1 00:10:41.077 rw=randwrite 00:10:41.077 time_based=1 00:10:41.077 runtime=1 00:10:41.077 ioengine=libaio 00:10:41.077 direct=1 00:10:41.077 bs=4096 00:10:41.077 iodepth=128 00:10:41.077 norandommap=0 00:10:41.077 numjobs=1 00:10:41.077 00:10:41.077 verify_dump=1 00:10:41.077 verify_backlog=512 00:10:41.077 verify_state_save=0 00:10:41.077 do_verify=1 00:10:41.077 verify=crc32c-intel 00:10:41.077 [job0] 00:10:41.077 filename=/dev/nvme0n1 00:10:41.077 [job1] 00:10:41.077 filename=/dev/nvme0n2 00:10:41.077 [job2] 00:10:41.077 filename=/dev/nvme0n3 00:10:41.077 [job3] 
00:10:41.077 filename=/dev/nvme0n4 00:10:41.077 Could not set queue depth (nvme0n1) 00:10:41.077 Could not set queue depth (nvme0n2) 00:10:41.077 Could not set queue depth (nvme0n3) 00:10:41.077 Could not set queue depth (nvme0n4) 00:10:41.647 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.647 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.647 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.648 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.648 fio-3.35 00:10:41.648 Starting 4 threads 00:10:42.590 00:10:42.590 job0: (groupid=0, jobs=1): err= 0: pid=2358430: Sat Dec 7 11:22:41 2024 00:10:42.590 read: IOPS=8791, BW=34.3MiB/s (36.0MB/s)(34.5MiB/1005msec) 00:10:42.590 slat (nsec): min=953, max=6357.7k, avg=60283.92, stdev=447582.54 00:10:42.590 clat (usec): min=2570, max=13378, avg=7683.72, stdev=1825.35 00:10:42.590 lat (usec): min=2577, max=13409, avg=7744.00, stdev=1852.52 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[ 3720], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6521], 00:10:42.590 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:10:42.590 | 70.00th=[ 7963], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[11731], 00:10:42.590 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13173], 99.95th=[13173], 00:10:42.590 | 99.99th=[13435] 00:10:42.590 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:10:42.590 slat (nsec): min=1625, max=5359.7k, avg=45474.06, stdev=232858.95 00:10:42.590 clat (usec): min=1002, max=13245, avg=6469.91, stdev=1415.05 00:10:42.590 lat (usec): min=1005, max=13271, avg=6515.38, stdev=1426.94 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[ 2409], 5.00th=[ 3556], 10.00th=[ 4293], 20.00th=[ 
5538], 00:10:42.590 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7111], 00:10:42.590 | 70.00th=[ 7242], 80.00th=[ 7308], 90.00th=[ 7373], 95.00th=[ 7504], 00:10:42.590 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[13042], 99.95th=[13042], 00:10:42.590 | 99.99th=[13304] 00:10:42.590 bw ( KiB/s): min=36864, max=36864, per=38.71%, avg=36864.00, stdev= 0.00, samples=2 00:10:42.590 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:10:42.590 lat (msec) : 2=0.19%, 4=4.18%, 10=89.41%, 20=6.22% 00:10:42.590 cpu : usr=5.88%, sys=9.96%, ctx=974, majf=0, minf=1 00:10:42.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:42.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.590 issued rwts: total=8835,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.590 job1: (groupid=0, jobs=1): err= 0: pid=2358431: Sat Dec 7 11:22:41 2024 00:10:42.590 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:10:42.590 slat (nsec): min=960, max=9169.8k, avg=87257.54, stdev=677416.44 00:10:42.590 clat (usec): min=3608, max=18872, avg=10619.70, stdev=2538.93 00:10:42.590 lat (usec): min=3618, max=18887, avg=10706.95, stdev=2585.20 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[ 4424], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:10:42.590 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:42.590 | 70.00th=[10552], 80.00th=[12518], 90.00th=[14877], 95.00th=[16188], 00:10:42.590 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:10:42.590 | 99.99th=[18744] 00:10:42.590 write: IOPS=6572, BW=25.7MiB/s (26.9MB/s)(25.9MiB/1010msec); 0 zone resets 00:10:42.590 slat (nsec): min=1648, max=2391.1k, avg=65501.56, stdev=201423.59 00:10:42.590 clat (usec): min=1120, max=23065, 
avg=9460.90, stdev=2539.62 00:10:42.590 lat (usec): min=1130, max=23074, avg=9526.40, stdev=2553.57 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[ 3326], 5.00th=[ 4817], 10.00th=[ 6063], 20.00th=[ 8455], 00:10:42.590 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:10:42.590 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[11207], 00:10:42.590 | 99.00th=[20317], 99.50th=[21627], 99.90th=[22938], 99.95th=[22938], 00:10:42.590 | 99.99th=[22938] 00:10:42.590 bw ( KiB/s): min=25200, max=26888, per=27.35%, avg=26044.00, stdev=1193.60, samples=2 00:10:42.590 iops : min= 6300, max= 6722, avg=6511.00, stdev=298.40, samples=2 00:10:42.590 lat (msec) : 2=0.02%, 4=1.35%, 10=54.85%, 20=43.23%, 50=0.55% 00:10:42.590 cpu : usr=4.46%, sys=5.45%, ctx=892, majf=0, minf=2 00:10:42.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:42.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.590 issued rwts: total=6144,6638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.590 job2: (groupid=0, jobs=1): err= 0: pid=2358432: Sat Dec 7 11:22:41 2024 00:10:42.590 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(8068KiB/1009msec) 00:10:42.590 slat (usec): min=4, max=26004, avg=263.24, stdev=1842.45 00:10:42.590 clat (usec): min=841, max=85751, avg=38548.38, stdev=24610.15 00:10:42.590 lat (usec): min=9941, max=85756, avg=38811.62, stdev=24704.86 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[10159], 5.00th=[16188], 10.00th=[16712], 20.00th=[16909], 00:10:42.590 | 30.00th=[17957], 40.00th=[18482], 50.00th=[23725], 60.00th=[36439], 00:10:42.590 | 70.00th=[58459], 80.00th=[66323], 90.00th=[79168], 95.00th=[80217], 00:10:42.590 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:10:42.590 | 99.99th=[85459] 
00:10:42.590 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:10:42.590 slat (usec): min=6, max=22366, avg=225.43, stdev=1499.47 00:10:42.590 clat (usec): min=7112, max=53133, avg=24301.68, stdev=13730.27 00:10:42.590 lat (usec): min=7154, max=58471, avg=24527.11, stdev=13823.44 00:10:42.590 clat percentiles (usec): 00:10:42.590 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[11994], 20.00th=[12911], 00:10:42.590 | 30.00th=[13698], 40.00th=[15008], 50.00th=[16319], 60.00th=[23462], 00:10:42.590 | 70.00th=[31589], 80.00th=[41681], 90.00th=[46924], 95.00th=[50070], 00:10:42.590 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:10:42.590 | 99.99th=[53216] 00:10:42.590 bw ( KiB/s): min= 8192, max= 8192, per=8.60%, avg=8192.00, stdev= 0.00, samples=2 00:10:42.590 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:42.590 lat (usec) : 1000=0.02% 00:10:42.590 lat (msec) : 10=1.30%, 20=49.32%, 50=30.73%, 100=18.62% 00:10:42.590 cpu : usr=2.38%, sys=2.08%, ctx=130, majf=0, minf=1 00:10:42.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:42.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.591 issued rwts: total=2017,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.591 job3: (groupid=0, jobs=1): err= 0: pid=2358433: Sat Dec 7 11:22:41 2024 00:10:42.591 read: IOPS=5837, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:10:42.591 slat (nsec): min=983, max=9313.3k, avg=84041.02, stdev=611316.58 00:10:42.591 clat (usec): min=1870, max=23161, avg=10537.76, stdev=2653.96 00:10:42.591 lat (usec): min=5504, max=23165, avg=10621.80, stdev=2706.60 00:10:42.591 clat percentiles (usec): 00:10:42.591 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[ 8848], 00:10:42.591 | 30.00th=[ 9372], 
40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:10:42.591 | 70.00th=[10421], 80.00th=[11076], 90.00th=[13960], 95.00th=[16450], 00:10:42.591 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:10:42.591 | 99.99th=[23200] 00:10:42.591 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:10:42.591 slat (nsec): min=1678, max=7639.5k, avg=74403.41, stdev=459761.37 00:10:42.591 clat (usec): min=1233, max=23830, avg=10723.49, stdev=4692.62 00:10:42.591 lat (usec): min=1245, max=23832, avg=10797.89, stdev=4724.74 00:10:42.591 clat percentiles (usec): 00:10:42.591 | 1.00th=[ 3490], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6521], 00:10:42.591 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10945], 00:10:42.591 | 70.00th=[13173], 80.00th=[15795], 90.00th=[16712], 95.00th=[19792], 00:10:42.591 | 99.00th=[22152], 99.50th=[23200], 99.90th=[23725], 99.95th=[23725], 00:10:42.591 | 99.99th=[23725] 00:10:42.591 bw ( KiB/s): min=24576, max=24576, per=25.81%, avg=24576.00, stdev= 0.00, samples=2 00:10:42.591 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:42.591 lat (msec) : 2=0.10%, 4=0.57%, 10=53.72%, 20=42.28%, 50=3.34% 00:10:42.591 cpu : usr=4.97%, sys=6.85%, ctx=429, majf=0, minf=1 00:10:42.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:42.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.591 issued rwts: total=5884,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.591 00:10:42.591 Run status group 0 (all jobs): 00:10:42.591 READ: bw=88.5MiB/s (92.8MB/s), 7996KiB/s-34.3MiB/s (8188kB/s-36.0MB/s), io=89.4MiB (93.7MB), run=1005-1010msec 00:10:42.591 WRITE: bw=93.0MiB/s (97.5MB/s), 8119KiB/s-35.8MiB/s (8314kB/s-37.6MB/s), io=93.9MiB (98.5MB), run=1005-1010msec 
00:10:42.591 00:10:42.591 Disk stats (read/write): 00:10:42.591 nvme0n1: ios=7336/7680, merge=0/0, ticks=52898/47742, in_queue=100640, util=87.17% 00:10:42.591 nvme0n2: ios=5162/5415, merge=0/0, ticks=52592/49729, in_queue=102321, util=91.23% 00:10:42.591 nvme0n3: ios=1555/1696, merge=0/0, ticks=16665/9961, in_queue=26626, util=92.40% 00:10:42.591 nvme0n4: ios=4996/5120, merge=0/0, ticks=49469/51719, in_queue=101188, util=97.33% 00:10:42.591 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:42.852 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2358621 00:10:42.852 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:42.852 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:42.852 [global] 00:10:42.852 thread=1 00:10:42.852 invalidate=1 00:10:42.852 rw=read 00:10:42.852 time_based=1 00:10:42.852 runtime=10 00:10:42.852 ioengine=libaio 00:10:42.852 direct=1 00:10:42.852 bs=4096 00:10:42.852 iodepth=1 00:10:42.852 norandommap=1 00:10:42.852 numjobs=1 00:10:42.852 00:10:42.852 [job0] 00:10:42.852 filename=/dev/nvme0n1 00:10:42.852 [job1] 00:10:42.852 filename=/dev/nvme0n2 00:10:42.852 [job2] 00:10:42.852 filename=/dev/nvme0n3 00:10:42.852 [job3] 00:10:42.852 filename=/dev/nvme0n4 00:10:42.852 Could not set queue depth (nvme0n1) 00:10:42.852 Could not set queue depth (nvme0n2) 00:10:42.852 Could not set queue depth (nvme0n3) 00:10:42.852 Could not set queue depth (nvme0n4) 00:10:43.113 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.113 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.113 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.113 
job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.113 fio-3.35 00:10:43.113 Starting 4 threads 00:10:45.660 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:45.926 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10280960, buflen=4096 00:10:45.926 fio: pid=2358963, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.926 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:46.186 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:10:46.186 fio: pid=2358962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.186 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.186 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:46.186 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10211328, buflen=4096 00:10:46.186 fio: pid=2358960, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.445 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.445 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:46.445 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=487424, buflen=4096 00:10:46.445 fio: pid=2358961, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:10:46.445 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.445 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:46.705 00:10:46.705 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2358960: Sat Dec 7 11:22:45 2024 00:10:46.706 read: IOPS=846, BW=3385KiB/s (3466kB/s)(9972KiB/2946msec) 00:10:46.706 slat (usec): min=6, max=24270, avg=53.53, stdev=775.75 00:10:46.706 clat (usec): min=387, max=42892, avg=1113.69, stdev=3707.38 00:10:46.706 lat (usec): min=413, max=42917, avg=1161.11, stdev=3773.74 00:10:46.706 clat percentiles (usec): 00:10:46.706 | 1.00th=[ 510], 5.00th=[ 594], 10.00th=[ 660], 20.00th=[ 725], 00:10:46.706 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 799], 00:10:46.706 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:10:46.706 | 99.00th=[ 1172], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.706 | 99.99th=[42730] 00:10:46.706 bw ( KiB/s): min= 96, max= 5016, per=50.56%, avg=3304.00, stdev=1995.64, samples=5 00:10:46.706 iops : min= 24, max= 1254, avg=826.00, stdev=498.91, samples=5 00:10:46.706 lat (usec) : 500=0.72%, 750=27.23%, 1000=70.21% 00:10:46.706 lat (msec) : 2=0.96%, 50=0.84% 00:10:46.706 cpu : usr=0.61%, sys=2.51%, ctx=2499, majf=0, minf=1 00:10:46.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.706 job1: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2358961: Sat Dec 7 11:22:45 2024 00:10:46.706 read: IOPS=37, BW=150KiB/s (153kB/s)(476KiB/3176msec) 00:10:46.706 slat (usec): min=20, max=14517, avg=147.96, stdev=1322.92 00:10:46.706 clat (usec): min=820, max=46037, avg=26288.26, stdev=20114.41 00:10:46.706 lat (usec): min=844, max=56935, avg=26437.26, stdev=20257.18 00:10:46.706 clat percentiles (usec): 00:10:46.706 | 1.00th=[ 873], 5.00th=[ 947], 10.00th=[ 1012], 20.00th=[ 1074], 00:10:46.706 | 30.00th=[ 1106], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:46.706 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:10:46.706 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:46.706 | 99.99th=[45876] 00:10:46.706 bw ( KiB/s): min= 89, max= 432, per=2.34%, avg=153.50, stdev=136.55, samples=6 00:10:46.706 iops : min= 22, max= 108, avg=38.33, stdev=34.16, samples=6 00:10:46.706 lat (usec) : 1000=8.33% 00:10:46.706 lat (msec) : 2=30.00%, 50=60.83% 00:10:46.706 cpu : usr=0.00%, sys=0.16%, ctx=122, majf=0, minf=2 00:10:46.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.706 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2358962: Sat Dec 7 11:22:45 2024 00:10:46.706 read: IOPS=24, BW=97.0KiB/s (99.4kB/s)(268KiB/2762msec) 00:10:46.706 slat (nsec): min=25630, max=34606, avg=26064.31, stdev=1073.66 00:10:46.706 clat (usec): min=952, max=43012, avg=40873.86, stdev=4984.56 00:10:46.706 lat (usec): min=986, max=43037, avg=40899.92, stdev=4983.50 00:10:46.706 clat 
percentiles (usec): 00:10:46.706 | 1.00th=[ 955], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:46.706 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:46.706 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:46.706 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:46.706 | 99.99th=[43254] 00:10:46.706 bw ( KiB/s): min= 96, max= 104, per=1.48%, avg=97.60, stdev= 3.58, samples=5 00:10:46.706 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:46.706 lat (usec) : 1000=1.47% 00:10:46.706 lat (msec) : 50=97.06% 00:10:46.706 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:10:46.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.706 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2358963: Sat Dec 7 11:22:45 2024 00:10:46.706 read: IOPS=975, BW=3899KiB/s (3993kB/s)(9.80MiB/2575msec) 00:10:46.706 slat (nsec): min=6721, max=60482, avg=25251.61, stdev=5450.57 00:10:46.706 clat (usec): min=401, max=42940, avg=985.68, stdev=2599.72 00:10:46.706 lat (usec): min=427, max=42965, avg=1010.93, stdev=2599.67 00:10:46.706 clat percentiles (usec): 00:10:46.706 | 1.00th=[ 519], 5.00th=[ 586], 10.00th=[ 644], 20.00th=[ 709], 00:10:46.706 | 30.00th=[ 758], 40.00th=[ 807], 50.00th=[ 848], 60.00th=[ 873], 00:10:46.706 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:10:46.706 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[42206], 99.95th=[42206], 00:10:46.706 | 99.99th=[42730] 00:10:46.706 bw ( KiB/s): min= 1800, max= 5032, per=59.92%, avg=3916.80, stdev=1353.18, 
samples=5 00:10:46.706 iops : min= 450, max= 1258, avg=979.20, stdev=338.29, samples=5 00:10:46.706 lat (usec) : 500=0.76%, 750=27.12%, 1000=67.82% 00:10:46.706 lat (msec) : 2=3.86%, 50=0.40% 00:10:46.706 cpu : usr=1.17%, sys=2.68%, ctx=2511, majf=0, minf=2 00:10:46.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.706 issued rwts: total=2511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.706 00:10:46.706 Run status group 0 (all jobs): 00:10:46.706 READ: bw=6535KiB/s (6692kB/s), 97.0KiB/s-3899KiB/s (99.4kB/s-3993kB/s), io=20.3MiB (21.3MB), run=2575-3176msec 00:10:46.706 00:10:46.706 Disk stats (read/write): 00:10:46.706 nvme0n1: ios=2429/0, merge=0/0, ticks=2683/0, in_queue=2683, util=93.49% 00:10:46.706 nvme0n2: ios=117/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.23% 00:10:46.706 nvme0n3: ios=63/0, merge=0/0, ticks=2576/0, in_queue=2576, util=96.03% 00:10:46.706 nvme0n4: ios=2240/0, merge=0/0, ticks=2207/0, in_queue=2207, util=96.06% 00:10:46.706 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.706 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:46.968 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.968 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:47.229 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.229 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:47.489 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.489 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:47.750 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:47.750 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2358621 00:10:47.750 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:47.750 11:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 
-- # return 0 00:10:48.323 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:48.324 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:48.324 nvmf hotplug test: fio failed as expected 00:10:48.324 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.585 rmmod nvme_tcp 00:10:48.585 rmmod nvme_fabrics 00:10:48.585 rmmod nvme_keyring 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.585 11:22:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2354936 ']' 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2354936 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2354936 ']' 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2354936 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354936 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354936' 00:10:48.585 killing process with pid 2354936 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2354936 00:10:48.585 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2354936 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.529 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.445 00:10:51.445 real 0m30.836s 00:10:51.445 user 2m44.965s 00:10:51.445 sys 0m9.629s 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.445 ************************************ 00:10:51.445 END TEST nvmf_fio_target 00:10:51.445 ************************************ 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.445 11:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.707 ************************************ 00:10:51.707 START TEST nvmf_bdevio 00:10:51.707 ************************************ 00:10:51.707 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:51.707 * Looking for test storage... 00:10:51.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.707 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.707 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.707 11:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # 
ver1_l=2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 
00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.707 --rc genhtml_branch_coverage=1 00:10:51.707 --rc genhtml_function_coverage=1 00:10:51.707 --rc genhtml_legend=1 00:10:51.707 --rc geninfo_all_blocks=1 00:10:51.707 --rc geninfo_unexecuted_blocks=1 00:10:51.707 00:10:51.707 ' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.707 --rc genhtml_branch_coverage=1 00:10:51.707 --rc genhtml_function_coverage=1 00:10:51.707 --rc genhtml_legend=1 00:10:51.707 --rc geninfo_all_blocks=1 00:10:51.707 --rc geninfo_unexecuted_blocks=1 00:10:51.707 00:10:51.707 ' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.707 --rc genhtml_branch_coverage=1 00:10:51.707 --rc genhtml_function_coverage=1 00:10:51.707 --rc genhtml_legend=1 00:10:51.707 --rc geninfo_all_blocks=1 00:10:51.707 --rc geninfo_unexecuted_blocks=1 00:10:51.707 00:10:51.707 ' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.707 --rc genhtml_branch_coverage=1 00:10:51.707 --rc genhtml_function_coverage=1 00:10:51.707 --rc genhtml_legend=1 00:10:51.707 --rc geninfo_all_blocks=1 00:10:51.707 --rc geninfo_unexecuted_blocks=1 00:10:51.707 00:10:51.707 ' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:51.707 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.969 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.970 11:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.108 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:00.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.109 11:22:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:00.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:00.109 Found net devices under 0000:31:00.0: cvl_0_0 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:00.109 Found net devices under 0000:31:00.1: cvl_0_1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:11:00.109 00:11:00.109 --- 10.0.0.2 ping statistics --- 00:11:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.109 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:11:00.109 00:11:00.109 --- 10.0.0.1 ping statistics --- 00:11:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.109 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2364399 00:11:00.109 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2364399 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2364399 ']' 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.110 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.110 [2024-12-07 11:22:58.735866] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:00.110 [2024-12-07 11:22:58.735995] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.110 [2024-12-07 11:22:58.905074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.110 [2024-12-07 11:22:59.029479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.110 [2024-12-07 11:22:59.029542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.110 [2024-12-07 11:22:59.029555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.110 [2024-12-07 11:22:59.029568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.110 [2024-12-07 11:22:59.029579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.110 [2024-12-07 11:22:59.032380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.110 [2024-12-07 11:22:59.032523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.110 [2024-12-07 11:22:59.032628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.110 [2024-12-07 11:22:59.032653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 [2024-12-07 11:22:59.567893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 Malloc0 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.375 [2024-12-07 
11:22:59.686079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:00.375 { 00:11:00.375 "params": { 00:11:00.375 "name": "Nvme$subsystem", 00:11:00.375 "trtype": "$TEST_TRANSPORT", 00:11:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.375 "adrfam": "ipv4", 00:11:00.375 "trsvcid": "$NVMF_PORT", 00:11:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.375 "hdgst": ${hdgst:-false}, 00:11:00.375 "ddgst": ${ddgst:-false} 00:11:00.375 }, 00:11:00.375 "method": "bdev_nvme_attach_controller" 00:11:00.375 } 00:11:00.375 EOF 00:11:00.375 )") 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:00.375 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:00.375 "params": { 00:11:00.375 "name": "Nvme1", 00:11:00.375 "trtype": "tcp", 00:11:00.375 "traddr": "10.0.0.2", 00:11:00.375 "adrfam": "ipv4", 00:11:00.375 "trsvcid": "4420", 00:11:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.375 "hdgst": false, 00:11:00.375 "ddgst": false 00:11:00.375 }, 00:11:00.375 "method": "bdev_nvme_attach_controller" 00:11:00.375 }' 00:11:00.638 [2024-12-07 11:22:59.777762] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:00.638 [2024-12-07 11:22:59.777880] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364702 ] 00:11:00.638 [2024-12-07 11:22:59.916956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.900 [2024-12-07 11:23:00.020904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.900 [2024-12-07 11:23:00.020986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.900 [2024-12-07 11:23:00.020994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.161 I/O targets: 00:11:01.161 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.161 00:11:01.161 00:11:01.161 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.161 http://cunit.sourceforge.net/ 00:11:01.161 00:11:01.161 00:11:01.161 Suite: bdevio tests on: Nvme1n1 00:11:01.421 Test: blockdev write read block ...passed 00:11:01.421 Test: blockdev write zeroes read block ...passed 00:11:01.421 Test: blockdev write zeroes read no split ...passed 00:11:01.421 Test: blockdev write zeroes read split 
...passed 00:11:01.421 Test: blockdev write zeroes read split partial ...passed 00:11:01.421 Test: blockdev reset ...[2024-12-07 11:23:00.752125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:01.422 [2024-12-07 11:23:00.752239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:11:01.682 [2024-12-07 11:23:00.896778] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:01.682 passed 00:11:01.682 Test: blockdev write read 8 blocks ...passed 00:11:01.682 Test: blockdev write read size > 128k ...passed 00:11:01.682 Test: blockdev write read invalid size ...passed 00:11:01.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.682 Test: blockdev write read max offset ...passed 00:11:01.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.944 Test: blockdev writev readv 8 blocks ...passed 00:11:01.944 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.944 Test: blockdev writev readv block ...passed 00:11:01.944 Test: blockdev writev readv size > 128k ...passed 00:11:01.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.944 Test: blockdev comparev and writev ...[2024-12-07 11:23:01.166436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.166473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.166490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 
11:23:01.166499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.167062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.167078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.167095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.167103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.167606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.167624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.167637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.167647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.168170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.168183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.168196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.944 [2024-12-07 11:23:01.168203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:01.944 passed 00:11:01.944 Test: blockdev nvme passthru rw ...passed 00:11:01.944 Test: blockdev nvme passthru vendor specific ...[2024-12-07 11:23:01.252911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.944 [2024-12-07 11:23:01.252937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.253303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.944 [2024-12-07 11:23:01.253315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.253742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.944 [2024-12-07 11:23:01.253753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.944 [2024-12-07 11:23:01.254104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.944 [2024-12-07 11:23:01.254116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:01.944 passed 00:11:01.944 Test: blockdev nvme admin passthru ...passed 00:11:02.205 Test: blockdev copy ...passed 00:11:02.205 00:11:02.205 Run Summary: Type Total Ran Passed Failed Inactive 00:11:02.205 suites 1 1 n/a 0 0 00:11:02.205 tests 23 23 23 0 0 00:11:02.205 asserts 152 152 152 0 n/a 00:11:02.205 00:11:02.205 Elapsed time = 1.778 seconds 
00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.776 rmmod nvme_tcp 00:11:02.776 rmmod nvme_fabrics 00:11:02.776 rmmod nvme_keyring 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2364399 ']' 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2364399 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2364399 ']' 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2364399 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.776 11:23:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364399 00:11:02.776 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:02.776 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:02.776 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364399' 00:11:02.776 killing process with pid 2364399 00:11:02.776 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2364399 00:11:02.776 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2364399 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.717 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.629 00:11:05.629 real 0m13.971s 00:11:05.629 user 0m20.838s 00:11:05.629 sys 0m6.613s 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.629 ************************************ 00:11:05.629 END TEST nvmf_bdevio 00:11:05.629 ************************************ 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:05.629 00:11:05.629 real 5m18.169s 00:11:05.629 user 12m31.780s 00:11:05.629 sys 1m51.638s 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.629 ************************************ 00:11:05.629 END TEST nvmf_target_core 00:11:05.629 ************************************ 00:11:05.629 11:23:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.629 11:23:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.629 11:23:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.629 11:23:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:05.629 ************************************ 00:11:05.629 START TEST nvmf_target_extra 00:11:05.629 ************************************ 00:11:05.629 11:23:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.889 * Looking for test storage... 00:11:05.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.889 --rc genhtml_branch_coverage=1 00:11:05.889 --rc genhtml_function_coverage=1 00:11:05.889 --rc genhtml_legend=1 00:11:05.889 --rc geninfo_all_blocks=1 
00:11:05.889 --rc geninfo_unexecuted_blocks=1 00:11:05.889 00:11:05.889 ' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.889 --rc genhtml_branch_coverage=1 00:11:05.889 --rc genhtml_function_coverage=1 00:11:05.889 --rc genhtml_legend=1 00:11:05.889 --rc geninfo_all_blocks=1 00:11:05.889 --rc geninfo_unexecuted_blocks=1 00:11:05.889 00:11:05.889 ' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.889 --rc genhtml_branch_coverage=1 00:11:05.889 --rc genhtml_function_coverage=1 00:11:05.889 --rc genhtml_legend=1 00:11:05.889 --rc geninfo_all_blocks=1 00:11:05.889 --rc geninfo_unexecuted_blocks=1 00:11:05.889 00:11:05.889 ' 00:11:05.889 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.889 --rc genhtml_branch_coverage=1 00:11:05.889 --rc genhtml_function_coverage=1 00:11:05.889 --rc genhtml_legend=1 00:11:05.889 --rc geninfo_all_blocks=1 00:11:05.889 --rc geninfo_unexecuted_blocks=1 00:11:05.889 00:11:05.889 ' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.890 ************************************ 00:11:05.890 START TEST nvmf_example 00:11:05.890 ************************************ 00:11:05.890 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:06.151 * Looking for test storage... 00:11:06.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.151 
11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.151 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.152 --rc genhtml_branch_coverage=1 00:11:06.152 --rc genhtml_function_coverage=1 00:11:06.152 --rc genhtml_legend=1 00:11:06.152 --rc geninfo_all_blocks=1 00:11:06.152 --rc geninfo_unexecuted_blocks=1 00:11:06.152 00:11:06.152 ' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.152 --rc genhtml_branch_coverage=1 00:11:06.152 --rc genhtml_function_coverage=1 00:11:06.152 --rc genhtml_legend=1 00:11:06.152 --rc geninfo_all_blocks=1 00:11:06.152 --rc geninfo_unexecuted_blocks=1 00:11:06.152 00:11:06.152 ' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.152 --rc genhtml_branch_coverage=1 00:11:06.152 --rc genhtml_function_coverage=1 00:11:06.152 --rc genhtml_legend=1 00:11:06.152 --rc geninfo_all_blocks=1 00:11:06.152 --rc geninfo_unexecuted_blocks=1 00:11:06.152 00:11:06.152 ' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.152 --rc 
genhtml_branch_coverage=1 00:11:06.152 --rc genhtml_function_coverage=1 00:11:06.152 --rc genhtml_legend=1 00:11:06.152 --rc geninfo_all_blocks=1 00:11:06.152 --rc geninfo_unexecuted_blocks=1 00:11:06.152 00:11:06.152 ' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:06.152 11:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.152 
11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.152 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.286 11:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:14.286 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:14.286 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:14.286 Found net devices under 0000:31:00.0: cvl_0_0 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.286 11:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.286 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:14.287 Found net devices under 0000:31:00.1: cvl_0_1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.287 
11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:14.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:14.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms
00:11:14.287 
00:11:14.287 --- 10.0.0.2 ping statistics ---
00:11:14.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.287 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:14.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:14.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:11:14.287 
00:11:14.287 --- 10.0.0.1 ping statistics ---
00:11:14.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:14.287 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:14.287 11:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2369560 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2369560 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2369560 ']' 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:14.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.287 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.548 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:14.809 
11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:14.809 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:24.813 Initializing NVMe Controllers
00:11:24.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:24.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:24.813 Initialization complete. Launching workers.
00:11:24.813 ========================================================
00:11:24.813 Latency(us)
00:11:24.813 Device Information : IOPS MiB/s Average min max
00:11:24.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17218.33 67.26 3716.40 882.32 16246.52
00:11:24.813 ========================================================
00:11:24.813 Total : 17218.33 67.26 3716.40 882.32 16246.52
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:25.072 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:25.072 rmmod nvme_tcp
00:11:25.072 rmmod nvme_fabrics
00:11:25.073 rmmod nvme_keyring
00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
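[Editorial note] For readers reconstructing the run from the xtrace output: the target-side configuration that produced the controller the perf run attached to is the RPC sequence issued earlier (create transport, create malloc bdev, create subsystem, add namespace, add listener). The sketch below collects that sequence into one script. It is a hedged sketch, not SPDK's nvmf_example.sh itself: `rpc` here only echoes each `scripts/rpc.py` invocation so it runs without root or a live target; swap the echo for a real invocation to drive one, and the path `scripts/rpc.py` assumes you are in an SPDK checkout.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf target setup seen in the log above.
# Method names (nvmf_create_transport etc.) are real SPDK JSON-RPCs;
# values (NQN, serial, address, port) simply mirror the log.
RPC="scripts/rpc.py"
rpc() { echo "+ $RPC $*"; }   # replace body with "$RPC" "$@" to run for real

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create 64 512                      # 64 MiB RAM bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0           # expose the bdev as a namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, an initiator can attach exactly as the perf tool did, with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'`.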
00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2369560 ']' 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2369560 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2369560 ']' 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2369560 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2369560 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2369560' 00:11:25.073 killing process with pid 2369560 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2369560 00:11:25.073 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2369560 00:11:26.011 nvmf threads initialize successfully 00:11:26.011 bdev subsystem init successfully 00:11:26.011 created a nvmf target service 00:11:26.011 create targets's poll groups done 00:11:26.011 all subsystems of target started 00:11:26.011 nvmf target is running 00:11:26.011 all subsystems of target stopped 00:11:26.011 destroy targets's poll groups done 00:11:26.011 destroyed the nvmf target service 00:11:26.011 bdev subsystem 
finish successfully 00:11:26.011 nvmf threads destroy successfully 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.011 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.560 00:11:28.560 real 0m22.157s 00:11:28.560 user 0m48.958s 00:11:28.560 sys 0m6.915s 00:11:28.560 
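The nvmftestfini teardown traced above syncs, unloads the nvme-tcp/nvme-fabrics modules, kills the target process by pid, and scrubs SPDK_NVMF-tagged iptables rules. A minimal sketch of that cleanup order follows; `run`, `DRY_RUN`, and `NVMF_PID` are hypothetical names for this illustration, not SPDK's actual helpers, and the default dry-run only prints the privileged commands:

```shell
# Sketch of the nvmftestfini-style cleanup order seen in the trace.
# NVMF_PID is a hypothetical placeholder for the target's pid; the
# modprobe/iptables steps need root, so DRY_RUN=1 (default) just echoes.
NVMF_PID="${NVMF_PID:-}"

run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

teardown() {
    run sync
    run modprobe -r nvme-tcp
    run modprobe -r nvme-fabrics
    # Kill the target app only if its pid is set and still alive.
    if [ -n "$NVMF_PID" ] && kill -0 "$NVMF_PID" 2>/dev/null; then
        run kill "$NVMF_PID"
    fi
    # Drop SPDK_NVMF-tagged firewall rules, as the iptr step does above.
    run sh -c "iptables-save | grep -v SPDK_NVMF | iptables-restore"
}
```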
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.560 ************************************ 00:11:28.560 END TEST nvmf_example 00:11:28.560 ************************************ 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.560 11:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 ************************************ 00:11:28.561 START TEST nvmf_filesystem 00:11:28.561 ************************************ 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.561 * Looking for test storage... 
00:11:28.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:28.561 
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.561 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:28.561 --rc genhtml_branch_coverage=1 00:11:28.561 --rc genhtml_function_coverage=1 00:11:28.561 --rc genhtml_legend=1 00:11:28.561 --rc geninfo_all_blocks=1 00:11:28.561 --rc geninfo_unexecuted_blocks=1 00:11:28.561 00:11:28.561 ' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.561 --rc genhtml_branch_coverage=1 00:11:28.561 --rc genhtml_function_coverage=1 00:11:28.561 --rc genhtml_legend=1 00:11:28.561 --rc geninfo_all_blocks=1 00:11:28.561 --rc geninfo_unexecuted_blocks=1 00:11:28.561 00:11:28.561 ' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:28.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.561 --rc genhtml_branch_coverage=1 00:11:28.561 --rc genhtml_function_coverage=1 00:11:28.561 --rc genhtml_legend=1 00:11:28.561 --rc geninfo_all_blocks=1 00:11:28.561 --rc geninfo_unexecuted_blocks=1 00:11:28.561 00:11:28.561 ' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.561 --rc genhtml_branch_coverage=1 00:11:28.561 --rc genhtml_function_coverage=1 00:11:28.561 --rc genhtml_legend=1 00:11:28.561 --rc geninfo_all_blocks=1 00:11:28.561 --rc geninfo_unexecuted_blocks=1 00:11:28.561 00:11:28.561 ' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:28.561 11:23:27 
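The cmp_versions trace earlier (`lt 1.15 2` splitting each version on dots and comparing field by field) implements a component-wise numeric version comparison. A self-contained sketch of the same idea follows; `ver_lt` is an illustrative name, not SPDK's actual scripts/common.sh helper, and it assumes purely numeric dot-separated fields:

```shell
# ver_lt A B: succeed (return 0) if version A < B, comparing
# dot-separated fields numerically; missing fields count as 0.
ver_lt() {
    i=1
    while :; do
        f1=$(echo "$1" | awk -F. -v i="$i" '{print $i}')
        f2=$(echo "$2" | awk -F. -v i="$i" '{print $i}')
        # Both versions exhausted with no difference: not less-than.
        [ -z "$f1" ] && [ -z "$f2" ] && return 1
        f1=${f1:-0}
        f2=${f2:-0}
        [ "$f1" -lt "$f2" ] && return 0
        [ "$f1" -gt "$f2" ] && return 1
        i=$((i + 1))
    done
}
```

Note this compares `1.2 < 1.10` as true (numeric fields), which plain string comparison would get wrong.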
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:28.561 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:28.561 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:28.561 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:28.562 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.562 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:28.562 
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:28.562 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:28.562 #define SPDK_CONFIG_H 00:11:28.562 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:28.562 #define SPDK_CONFIG_APPS 1 00:11:28.562 #define SPDK_CONFIG_ARCH native 00:11:28.562 #define SPDK_CONFIG_ASAN 1 00:11:28.562 #undef SPDK_CONFIG_AVAHI 00:11:28.562 #undef SPDK_CONFIG_CET 00:11:28.562 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:28.562 #define SPDK_CONFIG_COVERAGE 1 00:11:28.562 #define SPDK_CONFIG_CROSS_PREFIX 00:11:28.562 #undef SPDK_CONFIG_CRYPTO 00:11:28.562 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:28.562 #undef SPDK_CONFIG_CUSTOMOCF 00:11:28.562 #undef SPDK_CONFIG_DAOS 00:11:28.562 #define SPDK_CONFIG_DAOS_DIR 00:11:28.562 #define SPDK_CONFIG_DEBUG 1 00:11:28.562 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:28.562 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:28.562 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:28.562 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:28.562 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:28.562 #undef SPDK_CONFIG_DPDK_UADK 00:11:28.562 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.562 #define SPDK_CONFIG_EXAMPLES 1 00:11:28.562 #undef SPDK_CONFIG_FC 00:11:28.562 #define SPDK_CONFIG_FC_PATH 00:11:28.562 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:28.562 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:28.562 #define SPDK_CONFIG_FSDEV 1 00:11:28.562 #undef SPDK_CONFIG_FUSE 00:11:28.562 #undef SPDK_CONFIG_FUZZER 00:11:28.562 #define SPDK_CONFIG_FUZZER_LIB 00:11:28.562 #undef SPDK_CONFIG_GOLANG 00:11:28.562 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:28.562 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:28.562 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:28.562 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:28.562 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:28.562 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:28.562 #undef SPDK_CONFIG_HAVE_LZ4 00:11:28.562 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:28.562 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:28.562 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:28.562 #define SPDK_CONFIG_IDXD 1 00:11:28.562 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:28.562 #undef SPDK_CONFIG_IPSEC_MB 00:11:28.562 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:28.562 #define SPDK_CONFIG_ISAL 1 00:11:28.562 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:28.563 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:28.563 #define SPDK_CONFIG_LIBDIR 00:11:28.563 #undef SPDK_CONFIG_LTO 00:11:28.563 #define SPDK_CONFIG_MAX_LCORES 128 00:11:28.563 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:28.563 #define SPDK_CONFIG_NVME_CUSE 1 00:11:28.563 #undef SPDK_CONFIG_OCF 00:11:28.563 #define SPDK_CONFIG_OCF_PATH 00:11:28.563 #define SPDK_CONFIG_OPENSSL_PATH 00:11:28.563 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:28.563 #define SPDK_CONFIG_PGO_DIR 00:11:28.563 #undef SPDK_CONFIG_PGO_USE 00:11:28.563 #define SPDK_CONFIG_PREFIX /usr/local 00:11:28.563 #undef SPDK_CONFIG_RAID5F 00:11:28.563 #undef SPDK_CONFIG_RBD 00:11:28.563 #define SPDK_CONFIG_RDMA 1 00:11:28.563 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:28.563 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:28.563 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:28.563 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:28.563 #define SPDK_CONFIG_SHARED 1 00:11:28.563 #undef SPDK_CONFIG_SMA 00:11:28.563 #define SPDK_CONFIG_TESTS 1 00:11:28.563 #undef SPDK_CONFIG_TSAN 00:11:28.563 #define SPDK_CONFIG_UBLK 1 00:11:28.563 #define SPDK_CONFIG_UBSAN 1 00:11:28.563 #undef SPDK_CONFIG_UNIT_TESTS 00:11:28.563 #undef SPDK_CONFIG_URING 00:11:28.563 #define SPDK_CONFIG_URING_PATH 00:11:28.563 #undef SPDK_CONFIG_URING_ZNS 00:11:28.563 #undef SPDK_CONFIG_USDT 00:11:28.563 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:28.563 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:28.563 #undef SPDK_CONFIG_VFIO_USER 00:11:28.563 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:28.563 #define SPDK_CONFIG_VHOST 1 00:11:28.563 #define SPDK_CONFIG_VIRTIO 1 00:11:28.563 #undef SPDK_CONFIG_VTUNE 00:11:28.563 #define SPDK_CONFIG_VTUNE_DIR 00:11:28.563 #define SPDK_CONFIG_WERROR 1 00:11:28.563 #define SPDK_CONFIG_WPDK_DIR 00:11:28.563 #undef SPDK_CONFIG_XNVME 00:11:28.563 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:28.563 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:28.563 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:28.563 
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:28.564 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:28.564 
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:28.564 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
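The long run of `-- # : 0` followed by `export SPDK_TEST_*` pairs in this stretch of the trace is bash's default-assignment idiom: `: "${VAR:=default}"` assigns only when the variable is unset, so values pre-set by the CI job win, and the export then publishes the result. A minimal sketch (the variable name here is illustrative, not one of the real flags):

```shell
# `:` is a no-op command; ${VAR:=0} assigns 0 only when VAR is unset
# or empty, which is why the xtrace shows "-- # : 0" for each flag.
unset SPDK_TEST_DEMO
: "${SPDK_TEST_DEMO:=0}"
export SPDK_TEST_DEMO
echo "$SPDK_TEST_DEMO"
# prints 0

SPDK_TEST_DEMO=1
: "${SPDK_TEST_DEMO:=0}"   # leaves the pre-set 1 intact
echo "$SPDK_TEST_DEMO"
# prints 1
```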
00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.564 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2372605 ]] 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2372605 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:28.565 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rQA9mP 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rQA9mP/tests/target /tmp/spdk.rQA9mP 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123102347264 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356558336 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6254211072 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666910720 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847894016 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871314944 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23420928 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:28.566 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677797888 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678281216 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935643136 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935655424 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:28.566 * Looking for test storage... 
00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123102347264 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8468803584 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.566 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:28.566 11:23:27 
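[Editor's note] The `set_test_storage` trace above reads `df -T` into associative arrays keyed by mount point, scales the 1K-block columns to bytes, then checks the candidate directory's mount for enough free space. A condensed re-sketch of that loop (an illustration of the technique, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Column order follows `df -T` output: source, fstype, 1K-blocks, used,
# available, use%, mount point. Values are scaled to bytes, matching the
# byte counts printed in the trace.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=$((2 * 1024 * 1024 * 1024))          # 2 GiB, as in the trace
mount=$(df . | awk '$1 !~ /Filesystem/{print $6}')  # mount backing the cwd
target_space=${avails[$mount]}
if (( target_space >= requested_size )); then
    echo "enough space on $mount"
else
    echo "would fall back to a mktemp -udt spdk.XXXXXX dir"
fi
```

The `mktemp -udt spdk.XXXXXX` seen earlier in the trace only generates a fallback name (`-u` is dry-run); the directory tree is created later with `mkdir -p` once a candidate is chosen.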
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.566 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.567 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
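[Editor's note] The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` via `IFS=.-:` and compares them numerically, field by field. A condensed re-implementation of the same idea (illustrative; not the exact `scripts/common.sh` code):

```shell
#!/usr/bin/env bash
# lt A B: succeed iff version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing trailing fields default to 0, so 1.15 vs 2 compares 1 vs 2.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal
}

lt 1.15 2 && echo "lcov 1.15 predates 2, so the older --rc option names apply"
```

Purely numeric fields are assumed, as in the trace; suffixes like `-rc1` become their own comparison fields.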
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.828 --rc genhtml_branch_coverage=1 00:11:28.828 --rc genhtml_function_coverage=1 00:11:28.828 --rc genhtml_legend=1 00:11:28.828 --rc geninfo_all_blocks=1 00:11:28.828 --rc geninfo_unexecuted_blocks=1 00:11:28.828 00:11:28.828 ' 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.828 --rc genhtml_branch_coverage=1 00:11:28.828 --rc genhtml_function_coverage=1 00:11:28.828 --rc genhtml_legend=1 00:11:28.828 --rc geninfo_all_blocks=1 00:11:28.828 --rc geninfo_unexecuted_blocks=1 00:11:28.828 00:11:28.828 ' 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.828 --rc genhtml_branch_coverage=1 00:11:28.828 --rc genhtml_function_coverage=1 00:11:28.828 --rc genhtml_legend=1 00:11:28.828 --rc geninfo_all_blocks=1 00:11:28.828 --rc geninfo_unexecuted_blocks=1 00:11:28.828 00:11:28.828 ' 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.828 --rc genhtml_branch_coverage=1 00:11:28.828 --rc genhtml_function_coverage=1 00:11:28.828 --rc genhtml_legend=1 00:11:28.828 --rc geninfo_all_blocks=1 00:11:28.828 --rc geninfo_unexecuted_blocks=1 00:11:28.828 00:11:28.828 ' 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.828 11:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.828 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
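[Editor's note] The PATH values traced above grow because `paths/export.sh` prepends the same toolchain directories every time it is sourced, so `/opt/go`, `/opt/golangci`, and `/opt/protoc` repeat many times. An order-preserving dedup helper, shown purely as a generic illustration (the SPDK script does not do this, which is why the repetition appears in the log):

```shell
#!/usr/bin/env bash
# dedup_path PATHSTRING: print PATHSTRING with duplicate entries removed,
# keeping the first occurrence of each directory.
dedup_path() {
    local IFS=: dir out=
    local -A seen
    for dir in $1; do                    # unquoted: split on IFS=:
        [[ -n ${seen[$dir]:-} ]] && continue
        seen[$dir]=1
        out+=${out:+:}$dir
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin"
# → /opt/go/1.21.1/bin:/usr/bin:/usr/local/bin
```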
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
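[Editor's note] The trace records a benign script error at `nvmf/common.sh` line 33 — `'[' '' -eq 1 ']'` fails with `[: : integer expression expected` — because an unset flag expands to the empty string before the numeric `-eq`. A minimal reproduction of that error class and the usual defensive fix (illustrative; the trace does not show which flag was empty):

```shell
#!/usr/bin/env bash
flag=""                                       # empty, as in the trace
if [ "$flag" -eq 1 ] 2>/dev/null; then        # errors: '' is not an integer
    echo "flag set"
fi
if [ "${flag:-0}" -eq 1 ]; then               # defaulting to 0 keeps it numeric
    echo "flag set"
else
    echo "flag unset"
fi
```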
MALLOC_BDEV_SIZE=512 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.829 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.971 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:36.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:36.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.971 11:23:35 
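[Editor's note] The PCI-ID checks above print as `[[ 0x159b == \0\x\1\0\1\7 ]]` because, inside `[[ ]]`, an unquoted right-hand side is a glob pattern — xtrace backslash-escapes each character to show it is being matched literally. A small demonstration of the distinction:

```shell
#!/usr/bin/env bash
id=0x159b    # device ID of the E810 NICs found in the trace
[[ $id == 0x1017 ]] || echo "not one of the Mellanox IDs checked (e.g. 0x1017)"
[[ $id == 0x15* ]]  && echo "unquoted RHS is a glob: 0x15* matches"
[[ $id == "0x15*" ]] || echo "quoted RHS is literal: no match"
```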
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.971 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:36.972 Found net devices under 0000:31:00.0: cvl_0_0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:36.972 Found net devices under 0000:31:00.1: cvl_0_1 00:11:36.972 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:36.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:11:36.972 00:11:36.972 --- 10.0.0.2 ping statistics --- 00:11:36.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.972 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:11:36.972 00:11:36.972 --- 10.0.0.1 ping statistics --- 00:11:36.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.972 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:36.972 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.972 ************************************ 00:11:36.972 START TEST nvmf_filesystem_no_in_capsule 00:11:36.972 ************************************ 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2376390 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2376390 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2376390 ']' 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.972 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.972 [2024-12-07 11:23:35.525876] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:36.972 [2024-12-07 11:23:35.525999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.972 [2024-12-07 11:23:35.677173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.972 [2024-12-07 11:23:35.779552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.972 [2024-12-07 11:23:35.779594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:36.972 [2024-12-07 11:23:35.779606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.972 [2024-12-07 11:23:35.779618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.972 [2024-12-07 11:23:35.779627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.972 [2024-12-07 11:23:35.782083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.972 [2024-12-07 11:23:35.782139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.972 [2024-12-07 11:23:35.782279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.972 [2024-12-07 11:23:35.782303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.972 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.972 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:36.972 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.972 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.972 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.234 [2024-12-07 11:23:36.345109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.234 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.494 Malloc1 00:11:37.494 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.494 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.494 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.494 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.494 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.495 [2024-12-07 11:23:36.795292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:37.495 11:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:37.495 { 00:11:37.495 "name": "Malloc1", 00:11:37.495 "aliases": [ 00:11:37.495 "36da437a-b6f5-4ed0-ba03-f4354929c9ca" 00:11:37.495 ], 00:11:37.495 "product_name": "Malloc disk", 00:11:37.495 "block_size": 512, 00:11:37.495 "num_blocks": 1048576, 00:11:37.495 "uuid": "36da437a-b6f5-4ed0-ba03-f4354929c9ca", 00:11:37.495 "assigned_rate_limits": { 00:11:37.495 "rw_ios_per_sec": 0, 00:11:37.495 "rw_mbytes_per_sec": 0, 00:11:37.495 "r_mbytes_per_sec": 0, 00:11:37.495 "w_mbytes_per_sec": 0 00:11:37.495 }, 00:11:37.495 "claimed": true, 00:11:37.495 "claim_type": "exclusive_write", 00:11:37.495 "zoned": false, 00:11:37.495 "supported_io_types": { 00:11:37.495 "read": true, 00:11:37.495 "write": true, 00:11:37.495 "unmap": true, 00:11:37.495 "flush": true, 00:11:37.495 "reset": true, 00:11:37.495 "nvme_admin": false, 00:11:37.495 "nvme_io": false, 00:11:37.495 "nvme_io_md": false, 00:11:37.495 "write_zeroes": true, 00:11:37.495 "zcopy": true, 00:11:37.495 "get_zone_info": false, 00:11:37.495 "zone_management": false, 00:11:37.495 "zone_append": false, 00:11:37.495 "compare": false, 00:11:37.495 "compare_and_write": 
false, 00:11:37.495 "abort": true, 00:11:37.495 "seek_hole": false, 00:11:37.495 "seek_data": false, 00:11:37.495 "copy": true, 00:11:37.495 "nvme_iov_md": false 00:11:37.495 }, 00:11:37.495 "memory_domains": [ 00:11:37.495 { 00:11:37.495 "dma_device_id": "system", 00:11:37.495 "dma_device_type": 1 00:11:37.495 }, 00:11:37.495 { 00:11:37.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.495 "dma_device_type": 2 00:11:37.495 } 00:11:37.495 ], 00:11:37.495 "driver_specific": {} 00:11:37.495 } 00:11:37.495 ]' 00:11:37.495 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:37.755 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.756 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.668 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:39.668 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:39.668 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.668 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:39.668 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:41.582 11:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:41.582 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.843 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:42.830 11:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.830 ************************************ 00:11:42.830 START TEST filesystem_ext4 00:11:42.830 ************************************ 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:42.830 11:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:42.830 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:42.830 mke2fs 1.47.0 (5-Feb-2023) 00:11:43.089 Discarding device blocks: 0/522240 done 00:11:43.089 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.090 Filesystem UUID: bb3f75ed-d349-4a7b-b30f-4f7623ea51ec 00:11:43.090 Superblock backups stored on blocks: 00:11:43.090 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.090 00:11:43.090 Allocating group tables: 0/64 done 00:11:43.090 Writing inode tables: 0/64 done 00:11:43.090 Creating journal (8192 blocks): done 00:11:43.090 Writing superblocks and filesystem accounting information: 0/64 done 00:11:43.090 00:11:43.090 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:43.090 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.672 11:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2376390 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.672 00:11:49.672 real 0m5.780s 00:11:49.672 user 0m0.025s 00:11:49.672 sys 0m0.076s 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.672 ************************************ 00:11:49.672 END TEST filesystem_ext4 00:11:49.672 ************************************ 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.672 
11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.672 ************************************ 00:11:49.672 START TEST filesystem_btrfs 00:11:49.672 ************************************ 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:49.672 11:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:49.672 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.672 btrfs-progs v6.8.1 00:11:49.672 See https://btrfs.readthedocs.io for more information. 00:11:49.672 00:11:49.672 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:49.672 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.672 this does not affect your deployments: 00:11:49.672 - DUP for metadata (-m dup) 00:11:49.672 - enabled no-holes (-O no-holes) 00:11:49.672 - enabled free-space-tree (-R free-space-tree) 00:11:49.672 00:11:49.672 Label: (null) 00:11:49.672 UUID: 12483cb3-e3c4-4bae-8594-dc38152bc346 00:11:49.672 Node size: 16384 00:11:49.672 Sector size: 4096 (CPU page size: 4096) 00:11:49.672 Filesystem size: 510.00MiB 00:11:49.672 Block group profiles: 00:11:49.672 Data: single 8.00MiB 00:11:49.672 Metadata: DUP 32.00MiB 00:11:49.672 System: DUP 8.00MiB 00:11:49.672 SSD detected: yes 00:11:49.672 Zoned device: no 00:11:49.672 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.672 Checksum: crc32c 00:11:49.672 Number of devices: 1 00:11:49.672 Devices: 00:11:49.672 ID SIZE PATH 00:11:49.672 1 510.00MiB /dev/nvme0n1p1 00:11:49.672 00:11:49.672 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:49.672 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.934 11:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2376390 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.934 00:11:49.934 real 0m1.241s 00:11:49.934 user 0m0.036s 00:11:49.934 sys 0m0.110s 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.934 
11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 ************************************ 00:11:49.934 END TEST filesystem_btrfs 00:11:49.934 ************************************ 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.934 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.195 ************************************ 00:11:50.195 START TEST filesystem_xfs 00:11:50.195 ************************************ 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:50.195 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:50.195 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:50.195 = sectsz=512 attr=2, projid32bit=1 00:11:50.195 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:50.195 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:50.195 data = bsize=4096 blocks=130560, imaxpct=25 00:11:50.195 = sunit=0 swidth=0 blks 00:11:50.195 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:50.195 log =internal log bsize=4096 blocks=16384, version=2 00:11:50.195 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:50.195 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:51.138 Discarding blocks...Done. 
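The xtrace lines above show `make_filesystem` branching on the filesystem type before invoking mkfs: for btrfs it took the `'[' btrfs = ext4 ']'` branch and set `force=-f`, while later in this log the ext4 run sets `force=-F`. A minimal sketch of that flag-selection logic, under the assumption that the helper name `make_filesystem_flags` is hypothetical (the real helper in `autotest_common.sh` also retries mkfs in a loop via `local i=0`):

```shell
# Hypothetical reconstruction of the force-flag selection traced above:
# mkfs.ext4 takes -F to force, while mkfs.btrfs and mkfs.xfs take -f.
make_filesystem_flags() {
  local fstype=$1
  local force
  if [ "$fstype" = ext4 ]; then
    force=-F
  else
    force=-f
  fi
  # Print the mkfs command the harness would run (device argument omitted).
  echo "mkfs.$fstype $force"
}
```

In the traced runs this resolves to `mkfs.btrfs -f /dev/nvme0n1p1`, `mkfs.xfs -f /dev/nvme0n1p1`, and `mkfs.ext4 -F /dev/nvme0n1p1` respectively.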
00:11:51.138 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:51.138 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2376390 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.684 11:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.684 00:11:53.684 real 0m3.381s 00:11:53.684 user 0m0.027s 00:11:53.684 sys 0m0.080s 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.684 ************************************ 00:11:53.684 END TEST filesystem_xfs 00:11:53.684 ************************************ 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:53.684 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.946 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.207 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:54.207 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.207 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.207 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.207 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2376390 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2376390 ']' 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2376390 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376390 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376390' 00:11:54.208 killing process with pid 2376390 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2376390 00:11:54.208 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2376390 00:11:56.295 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:56.295 00:11:56.295 real 0m19.654s 00:11:56.295 user 1m16.286s 00:11:56.295 sys 0m1.588s 00:11:56.295 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.295 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.295 ************************************ 00:11:56.295 END TEST nvmf_filesystem_no_in_capsule 00:11:56.295 ************************************ 00:11:56.295 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.296 11:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.296 ************************************ 00:11:56.296 START TEST nvmf_filesystem_in_capsule 00:11:56.296 ************************************ 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2380391 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2380391 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2380391 ']' 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.296 11:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.296 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.296 [2024-12-07 11:23:55.262144] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:56.296 [2024-12-07 11:23:55.262274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.296 [2024-12-07 11:23:55.415239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.296 [2024-12-07 11:23:55.517293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.296 [2024-12-07 11:23:55.517339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.296 [2024-12-07 11:23:55.517351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.296 [2024-12-07 11:23:55.517363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.296 [2024-12-07 11:23:55.517373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
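The `waitforlisten` trace above (`local rpc_addr=/var/tmp/spdk.sock`, `local max_retries=100`) shows the harness polling for the freshly started `nvmf_tgt` before issuing RPCs. A minimal sketch of that polling pattern, assuming a hypothetical helper name and sleep interval (the real helper additionally verifies the target pid is still alive between retries):

```shell
# Hypothetical sketch of the waitforlisten pattern: poll until the target's
# UNIX-domain RPC socket (e.g. /var/tmp/spdk.sock) appears, up to max_retries
# attempts, then give up.
wait_for_rpc_socket() {
  local sock=$1
  local max_retries=${2:-100}
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    if [ -S "$sock" ]; then
      return 0   # socket exists; the target is accepting RPCs
    fi
    sleep 0.1
    i=$((i + 1))
  done
  return 1       # timed out waiting for the socket
}
```

Once the socket is up, the script proceeds to `rpc_cmd nvmf_create_transport` and the subsystem setup seen below.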
00:11:56.296 [2024-12-07 11:23:55.519615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.296 [2024-12-07 11:23:55.519700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.296 [2024-12-07 11:23:55.519813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.296 [2024-12-07 11:23:55.519837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.879 [2024-12-07 11:23:56.078780] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.879 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.451 Malloc1 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.451 11:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.451 [2024-12-07 11:23:56.525590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.451 11:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.451 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:57.451 { 00:11:57.451 "name": "Malloc1", 00:11:57.451 "aliases": [ 00:11:57.452 "b89d5f6b-3384-4ff4-8bf7-da4d1e793205" 00:11:57.452 ], 00:11:57.452 "product_name": "Malloc disk", 00:11:57.452 "block_size": 512, 00:11:57.452 "num_blocks": 1048576, 00:11:57.452 "uuid": "b89d5f6b-3384-4ff4-8bf7-da4d1e793205", 00:11:57.452 "assigned_rate_limits": { 00:11:57.452 "rw_ios_per_sec": 0, 00:11:57.452 "rw_mbytes_per_sec": 0, 00:11:57.452 "r_mbytes_per_sec": 0, 00:11:57.452 "w_mbytes_per_sec": 0 00:11:57.452 }, 00:11:57.452 "claimed": true, 00:11:57.452 "claim_type": "exclusive_write", 00:11:57.452 "zoned": false, 00:11:57.452 "supported_io_types": { 00:11:57.452 "read": true, 00:11:57.452 "write": true, 00:11:57.452 "unmap": true, 00:11:57.452 "flush": true, 00:11:57.452 "reset": true, 00:11:57.452 "nvme_admin": false, 00:11:57.452 "nvme_io": false, 00:11:57.452 "nvme_io_md": false, 00:11:57.452 "write_zeroes": true, 00:11:57.452 "zcopy": true, 00:11:57.452 "get_zone_info": false, 00:11:57.452 "zone_management": false, 00:11:57.452 "zone_append": false, 00:11:57.452 "compare": false, 00:11:57.452 "compare_and_write": false, 00:11:57.452 "abort": true, 00:11:57.452 "seek_hole": false, 00:11:57.452 "seek_data": false, 00:11:57.452 "copy": true, 00:11:57.452 "nvme_iov_md": false 00:11:57.452 }, 00:11:57.452 "memory_domains": [ 00:11:57.452 { 00:11:57.452 "dma_device_id": "system", 00:11:57.452 "dma_device_type": 1 00:11:57.452 }, 00:11:57.452 { 00:11:57.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.452 "dma_device_type": 2 00:11:57.452 } 00:11:57.452 ], 00:11:57.452 
"driver_specific": {} 00:11:57.452 } 00:11:57.452 ]' 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:57.452 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.836 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.837 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.837 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.837 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:58.837 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.384 11:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:01.384 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:01.644 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:02.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:02.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:02.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.585 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.846 ************************************ 00:12:02.846 START TEST filesystem_in_capsule_ext4 00:12:02.846 ************************************ 00:12:02.846 11:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:02.846 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:02.847 mke2fs 1.47.0 (5-Feb-2023) 00:12:02.847 Discarding device blocks: 
0/522240 done 00:12:02.847 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:02.847 Filesystem UUID: a57b03f7-7887-443d-a067-bd5467dac3ea 00:12:02.847 Superblock backups stored on blocks: 00:12:02.847 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:02.847 00:12:02.847 Allocating group tables: 0/64 done 00:12:02.847 Writing inode tables: 0/64 done 00:12:03.107 Creating journal (8192 blocks): done 00:12:03.107 Writing superblocks and filesystem accounting information: 0/64 done 00:12:03.107 00:12:03.107 11:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:03.107 11:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2380391 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.692 00:12:09.692 real 0m5.909s 00:12:09.692 user 0m0.028s 00:12:09.692 sys 0m0.072s 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 ************************************ 00:12:09.692 END TEST filesystem_in_capsule_ext4 00:12:09.692 ************************************ 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 ************************************ 00:12:09.692 START 
TEST filesystem_in_capsule_btrfs 00:12:09.692 ************************************ 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.692 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:09.692 btrfs-progs v6.8.1 00:12:09.692 See https://btrfs.readthedocs.io for more information. 00:12:09.692 00:12:09.692 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:09.692 NOTE: several default settings have changed in version 5.15, please make sure 00:12:09.692 this does not affect your deployments: 00:12:09.692 - DUP for metadata (-m dup) 00:12:09.692 - enabled no-holes (-O no-holes) 00:12:09.692 - enabled free-space-tree (-R free-space-tree) 00:12:09.692 00:12:09.692 Label: (null) 00:12:09.692 UUID: 9666195f-74a2-4a4f-ad30-eb3b4714eee3 00:12:09.692 Node size: 16384 00:12:09.692 Sector size: 4096 (CPU page size: 4096) 00:12:09.692 Filesystem size: 510.00MiB 00:12:09.692 Block group profiles: 00:12:09.692 Data: single 8.00MiB 00:12:09.692 Metadata: DUP 32.00MiB 00:12:09.692 System: DUP 8.00MiB 00:12:09.692 SSD detected: yes 00:12:09.692 Zoned device: no 00:12:09.692 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:09.692 Checksum: crc32c 00:12:09.692 Number of devices: 1 00:12:09.692 Devices: 00:12:09.692 ID SIZE PATH 00:12:09.692 1 510.00MiB /dev/nvme0n1p1 00:12:09.692 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2380391 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.692 00:12:09.692 real 0m0.783s 00:12:09.692 user 0m0.023s 00:12:09.692 sys 0m0.129s 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 ************************************ 00:12:09.692 END TEST filesystem_in_capsule_btrfs 00:12:09.692 ************************************ 00:12:09.692 11:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 ************************************ 00:12:09.692 START TEST filesystem_in_capsule_xfs 00:12:09.692 ************************************ 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:09.692 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.693 
11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.693 11:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:09.693 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:09.693 = sectsz=512 attr=2, projid32bit=1 00:12:09.693 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:09.693 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:09.693 data = bsize=4096 blocks=130560, imaxpct=25 00:12:09.693 = sunit=0 swidth=0 blks 00:12:09.693 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:09.693 log =internal log bsize=4096 blocks=16384, version=2 00:12:09.693 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:09.693 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.634 Discarding blocks...Done. 
00:12:10.635 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:10.635 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2380391 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.547 00:12:12.547 real 0m2.767s 00:12:12.547 user 0m0.030s 00:12:12.547 sys 0m0.075s 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 ************************************ 00:12:12.547 END TEST filesystem_in_capsule_xfs 00:12:12.547 ************************************ 00:12:12.547 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:12.807 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:12.807 11:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.069 11:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2380391 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2380391 ']' 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2380391 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.069 11:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2380391 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2380391' 00:12:13.069 killing process with pid 2380391 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2380391 00:12:13.069 11:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2380391 00:12:14.984 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:14.984 00:12:14.984 real 0m18.819s 00:12:14.984 user 1m12.931s 00:12:14.984 sys 0m1.597s 00:12:14.984 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.984 11:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.984 ************************************ 00:12:14.984 END TEST nvmf_filesystem_in_capsule 00:12:14.984 ************************************ 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.984 rmmod nvme_tcp 00:12:14.984 rmmod nvme_fabrics 00:12:14.984 rmmod nvme_keyring 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:14.984 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.985 11:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.897 00:12:16.897 real 0m48.732s 00:12:16.897 user 2m31.520s 00:12:16.897 sys 0m9.084s 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.897 ************************************ 00:12:16.897 END TEST nvmf_filesystem 00:12:16.897 ************************************ 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.897 11:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.174 ************************************ 00:12:17.174 START TEST nvmf_target_discovery 00:12:17.174 ************************************ 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:17.174 * Looking for test storage... 
00:12:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:17.174 
11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.174 --rc genhtml_branch_coverage=1 00:12:17.174 --rc genhtml_function_coverage=1 00:12:17.174 --rc genhtml_legend=1 00:12:17.174 --rc geninfo_all_blocks=1 00:12:17.174 --rc geninfo_unexecuted_blocks=1 00:12:17.174 00:12:17.174 ' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.174 --rc genhtml_branch_coverage=1 00:12:17.174 --rc genhtml_function_coverage=1 00:12:17.174 --rc genhtml_legend=1 00:12:17.174 --rc geninfo_all_blocks=1 00:12:17.174 --rc geninfo_unexecuted_blocks=1 00:12:17.174 00:12:17.174 ' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.174 --rc genhtml_branch_coverage=1 00:12:17.174 --rc genhtml_function_coverage=1 00:12:17.174 --rc genhtml_legend=1 00:12:17.174 --rc geninfo_all_blocks=1 00:12:17.174 --rc geninfo_unexecuted_blocks=1 00:12:17.174 00:12:17.174 ' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.174 --rc genhtml_branch_coverage=1 00:12:17.174 --rc genhtml_function_coverage=1 00:12:17.174 --rc genhtml_legend=1 00:12:17.174 --rc geninfo_all_blocks=1 00:12:17.174 --rc geninfo_unexecuted_blocks=1 00:12:17.174 00:12:17.174 ' 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.174 11:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.174 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
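The trace above records a real shell error from `common.sh` line 33 (`[: : integer expression expected`): the expansion `'[' '' -eq 1 ']'` applies a numeric test to an empty variable. A minimal sketch of the failure mode and the usual defensive fix, using a hypothetical stand-in variable (not the actual SPDK flag name):

```shell
#!/usr/bin/env bash
# Sketch (not the SPDK source): reproduce and guard against the
# "[: : integer expression expected" error seen in the log, which comes
# from running an arithmetic test on an empty string, i.e. [ '' -eq 1 ].

SOME_NUMERIC_FLAG=""   # hypothetical stand-in for the unset flag

# Fragile form: errors when the variable is empty (stderr suppressed here,
# and the test simply evaluates false).
if [ "$SOME_NUMERIC_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag enabled"
fi

# Defensive form: default the variable to 0 before the numeric test.
if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The `${VAR:-0}` expansion substitutes `0` when the variable is unset or null, so the `-eq` comparison always receives an integer.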
NULL_BDEV_SIZE=102400 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.175 11:24:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.315 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.315 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.316 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:25.316 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:25.316 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.316 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:25.316 Found net devices under 0000:31:00.0: cvl_0_0 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.316 11:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:25.316 Found net devices under 0000:31:00.1: cvl_0_1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.316 11:24:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:12:25.316 00:12:25.316 --- 10.0.0.2 ping statistics --- 00:12:25.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.316 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:12:25.316 00:12:25.316 --- 10.0.0.1 ping statistics --- 00:12:25.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.316 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.316 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2389176 00:12:25.317 11:24:24 
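The `nvmf_tcp_init` steps traced above build a two-interface loopback topology: one physical port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), the other (`cvl_0_1`) stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened via iptables, and both directions are verified with a ping. A condensed sketch of that sequence follows; interface names and addresses are taken from the log, and since the real commands need root and physical NICs, this sketch only prints them:

```shell
#!/usr/bin/env bash
# Condensed sketch of the namespace setup performed by nvmf_tcp_init in
# the trace above. Names/addresses come from the log; the commands need
# root and real NICs, so 'run' prints them instead of executing.
set -euo pipefail

NS=cvl_0_0_ns_spdk      # network namespace that will host the target
TGT_IF=cvl_0_0          # NIC handed to the target side (10.0.0.2)
INI_IF=cvl_0_1          # NIC left for the initiator side (10.0.0.1)
run() { echo "+ $*"; }  # swap 'echo' for 'sudo' to actually apply

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

Isolating the target NIC in its own namespace is what lets a single host exercise a real TCP path between initiator and target instead of short-circuiting through the loopback device.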
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2389176 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2389176 ']' 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.317 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.317 [2024-12-07 11:24:24.176140] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:25.317 [2024-12-07 11:24:24.176251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.317 [2024-12-07 11:24:24.313750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.317 [2024-12-07 11:24:24.415127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:25.317 [2024-12-07 11:24:24.415171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.317 [2024-12-07 11:24:24.415183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.317 [2024-12-07 11:24:24.415195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.317 [2024-12-07 11:24:24.415204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.317 [2024-12-07 11:24:24.417438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.317 [2024-12-07 11:24:24.417518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.317 [2024-12-07 11:24:24.417634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.317 [2024-12-07 11:24:24.417658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 [2024-12-07 11:24:24.988437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 Null1 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 
11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 [2024-12-07 11:24:25.059693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 Null2 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 
11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.887 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 Null3 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 Null4 00:12:25.888 
11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.888 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:26.148 00:12:26.148 Discovery Log Number of Records 6, Generation counter 6 00:12:26.148 =====Discovery Log Entry 0====== 00:12:26.148 trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: current discovery subsystem 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4420 00:12:26.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: explicit discovery connections, duplicate discovery information 00:12:26.148 sectype: none 00:12:26.148 =====Discovery Log Entry 1====== 00:12:26.148 trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: nvme subsystem 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4420 00:12:26.148 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: none 00:12:26.148 sectype: none 00:12:26.148 =====Discovery Log Entry 2====== 00:12:26.148 
trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: nvme subsystem 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4420 00:12:26.148 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: none 00:12:26.148 sectype: none 00:12:26.148 =====Discovery Log Entry 3====== 00:12:26.148 trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: nvme subsystem 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4420 00:12:26.148 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: none 00:12:26.148 sectype: none 00:12:26.148 =====Discovery Log Entry 4====== 00:12:26.148 trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: nvme subsystem 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4420 00:12:26.148 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: none 00:12:26.148 sectype: none 00:12:26.148 =====Discovery Log Entry 5====== 00:12:26.148 trtype: tcp 00:12:26.148 adrfam: ipv4 00:12:26.148 subtype: discovery subsystem referral 00:12:26.148 treq: not required 00:12:26.148 portid: 0 00:12:26.148 trsvcid: 4430 00:12:26.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:26.148 traddr: 10.0.0.2 00:12:26.148 eflags: none 00:12:26.148 sectype: none 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:26.148 Perform nvmf subsystem discovery via RPC 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.148 [ 00:12:26.148 { 00:12:26.148 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:26.148 "subtype": "Discovery", 00:12:26.148 "listen_addresses": [ 00:12:26.148 { 00:12:26.148 "trtype": "TCP", 00:12:26.148 "adrfam": "IPv4", 00:12:26.148 "traddr": "10.0.0.2", 00:12:26.148 "trsvcid": "4420" 00:12:26.148 } 00:12:26.148 ], 00:12:26.148 "allow_any_host": true, 00:12:26.148 "hosts": [] 00:12:26.148 }, 00:12:26.148 { 00:12:26.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:26.148 "subtype": "NVMe", 00:12:26.148 "listen_addresses": [ 00:12:26.148 { 00:12:26.148 "trtype": "TCP", 00:12:26.148 "adrfam": "IPv4", 00:12:26.148 "traddr": "10.0.0.2", 00:12:26.148 "trsvcid": "4420" 00:12:26.148 } 00:12:26.148 ], 00:12:26.148 "allow_any_host": true, 00:12:26.148 "hosts": [], 00:12:26.148 "serial_number": "SPDK00000000000001", 00:12:26.148 "model_number": "SPDK bdev Controller", 00:12:26.148 "max_namespaces": 32, 00:12:26.148 "min_cntlid": 1, 00:12:26.148 "max_cntlid": 65519, 00:12:26.148 "namespaces": [ 00:12:26.148 { 00:12:26.148 "nsid": 1, 00:12:26.148 "bdev_name": "Null1", 00:12:26.148 "name": "Null1", 00:12:26.148 "nguid": "65AE8CA76141472CA8C2D37AEA25CFF5", 00:12:26.148 "uuid": "65ae8ca7-6141-472c-a8c2-d37aea25cff5" 00:12:26.148 } 00:12:26.148 ] 00:12:26.148 }, 00:12:26.148 { 00:12:26.148 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:26.148 "subtype": "NVMe", 00:12:26.148 "listen_addresses": [ 00:12:26.148 { 00:12:26.148 "trtype": "TCP", 00:12:26.148 "adrfam": "IPv4", 00:12:26.148 "traddr": "10.0.0.2", 00:12:26.148 "trsvcid": "4420" 00:12:26.148 } 00:12:26.148 ], 00:12:26.148 "allow_any_host": true, 00:12:26.148 "hosts": [], 00:12:26.148 "serial_number": "SPDK00000000000002", 00:12:26.148 "model_number": "SPDK bdev Controller", 00:12:26.148 "max_namespaces": 32, 00:12:26.148 "min_cntlid": 1, 00:12:26.148 "max_cntlid": 65519, 00:12:26.148 "namespaces": [ 00:12:26.148 { 00:12:26.148 "nsid": 1, 00:12:26.148 "bdev_name": "Null2", 00:12:26.148 "name": "Null2", 00:12:26.148 "nguid": "59EB857FA30D4751BF81A759343BD360", 
00:12:26.148 "uuid": "59eb857f-a30d-4751-bf81-a759343bd360" 00:12:26.148 } 00:12:26.148 ] 00:12:26.148 }, 00:12:26.148 { 00:12:26.148 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:26.148 "subtype": "NVMe", 00:12:26.148 "listen_addresses": [ 00:12:26.148 { 00:12:26.148 "trtype": "TCP", 00:12:26.148 "adrfam": "IPv4", 00:12:26.148 "traddr": "10.0.0.2", 00:12:26.148 "trsvcid": "4420" 00:12:26.148 } 00:12:26.148 ], 00:12:26.148 "allow_any_host": true, 00:12:26.148 "hosts": [], 00:12:26.148 "serial_number": "SPDK00000000000003", 00:12:26.148 "model_number": "SPDK bdev Controller", 00:12:26.148 "max_namespaces": 32, 00:12:26.148 "min_cntlid": 1, 00:12:26.148 "max_cntlid": 65519, 00:12:26.148 "namespaces": [ 00:12:26.148 { 00:12:26.148 "nsid": 1, 00:12:26.148 "bdev_name": "Null3", 00:12:26.148 "name": "Null3", 00:12:26.148 "nguid": "11FE87940770430C83ED173BE724943D", 00:12:26.148 "uuid": "11fe8794-0770-430c-83ed-173be724943d" 00:12:26.148 } 00:12:26.148 ] 00:12:26.148 }, 00:12:26.148 { 00:12:26.148 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:26.148 "subtype": "NVMe", 00:12:26.148 "listen_addresses": [ 00:12:26.148 { 00:12:26.148 "trtype": "TCP", 00:12:26.148 "adrfam": "IPv4", 00:12:26.148 "traddr": "10.0.0.2", 00:12:26.148 "trsvcid": "4420" 00:12:26.148 } 00:12:26.148 ], 00:12:26.148 "allow_any_host": true, 00:12:26.148 "hosts": [], 00:12:26.148 "serial_number": "SPDK00000000000004", 00:12:26.148 "model_number": "SPDK bdev Controller", 00:12:26.148 "max_namespaces": 32, 00:12:26.148 "min_cntlid": 1, 00:12:26.148 "max_cntlid": 65519, 00:12:26.148 "namespaces": [ 00:12:26.148 { 00:12:26.148 "nsid": 1, 00:12:26.148 "bdev_name": "Null4", 00:12:26.148 "name": "Null4", 00:12:26.148 "nguid": "F18E24E1192A4B189C38566BB77EE7DF", 00:12:26.148 "uuid": "f18e24e1-192a-4b18-9c38-566bb77ee7df" 00:12:26.148 } 00:12:26.148 ] 00:12:26.148 } 00:12:26.148 ] 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.148 
11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.148 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.408 rmmod nvme_tcp 00:12:26.408 rmmod nvme_fabrics 00:12:26.408 rmmod nvme_keyring 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2389176 ']' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2389176 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2389176 ']' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2389176 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.408 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2389176 00:12:26.669 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.669 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.669 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2389176' 00:12:26.669 killing process with pid 2389176 00:12:26.669 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2389176 00:12:26.669 11:24:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2389176 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.238 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.239 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.239 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.239 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.239 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.239 11:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.782 00:12:29.782 real 0m12.385s 00:12:29.782 user 0m10.198s 00:12:29.782 sys 0m6.210s 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 ************************************ 00:12:29.782 END TEST nvmf_target_discovery 00:12:29.782 ************************************ 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 ************************************ 00:12:29.782 START TEST nvmf_referrals 00:12:29.782 ************************************ 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:29.782 * Looking for test storage... 
00:12:29.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:29.782 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.782 
--rc genhtml_branch_coverage=1 00:12:29.782 --rc genhtml_function_coverage=1 00:12:29.782 --rc genhtml_legend=1 00:12:29.782 --rc geninfo_all_blocks=1 00:12:29.782 --rc geninfo_unexecuted_blocks=1 00:12:29.782 00:12:29.782 ' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.782 --rc genhtml_branch_coverage=1 00:12:29.782 --rc genhtml_function_coverage=1 00:12:29.782 --rc genhtml_legend=1 00:12:29.782 --rc geninfo_all_blocks=1 00:12:29.782 --rc geninfo_unexecuted_blocks=1 00:12:29.782 00:12:29.782 ' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.782 --rc genhtml_branch_coverage=1 00:12:29.782 --rc genhtml_function_coverage=1 00:12:29.782 --rc genhtml_legend=1 00:12:29.782 --rc geninfo_all_blocks=1 00:12:29.782 --rc geninfo_unexecuted_blocks=1 00:12:29.782 00:12:29.782 ' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.782 --rc genhtml_branch_coverage=1 00:12:29.782 --rc genhtml_function_coverage=1 00:12:29.782 --rc genhtml_legend=1 00:12:29.782 --rc geninfo_all_blocks=1 00:12:29.782 --rc geninfo_unexecuted_blocks=1 00:12:29.782 00:12:29.782 ' 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.782 
11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.782 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.783 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.783 11:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.783 11:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:37.934 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:37.934 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:37.934 Found net devices under 0000:31:00.0: cvl_0_0 00:12:37.934 11:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:37.934 Found net devices under 0000:31:00.1: cvl_0_1 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.934 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:12:37.935 00:12:37.935 --- 10.0.0.2 ping statistics --- 00:12:37.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.935 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:12:37.935 00:12:37.935 --- 10.0.0.1 ping statistics --- 00:12:37.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.935 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2393937 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2393937 00:12:37.935 
11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2393937 ']' 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.935 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.935 [2024-12-07 11:24:36.661390] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:37.935 [2024-12-07 11:24:36.661496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.935 [2024-12-07 11:24:36.802036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.935 [2024-12-07 11:24:36.901663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.935 [2024-12-07 11:24:36.901708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:37.935 [2024-12-07 11:24:36.901720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.935 [2024-12-07 11:24:36.901731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.935 [2024-12-07 11:24:36.901740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.935 [2024-12-07 11:24:36.904069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.935 [2024-12-07 11:24:36.904132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.935 [2024-12-07 11:24:36.904247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.935 [2024-12-07 11:24:36.904268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 [2024-12-07 11:24:37.479451] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 [2024-12-07 11:24:37.507701] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:38.197 11:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.197 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.458 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.719 11:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.719 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.979 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.240 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.500 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:39.762 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:39.762 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.023 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:40.024 11:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:40.024 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:40.024 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:40.024 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.024 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.286 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.547 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:40.547 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:40.547 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.548 rmmod nvme_tcp 00:12:40.548 rmmod nvme_fabrics 00:12:40.548 rmmod nvme_keyring 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2393937 ']' 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2393937 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2393937 ']' 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2393937 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393937 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393937' 00:12:40.548 killing process with pid 2393937 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2393937 00:12:40.548 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2393937 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.488 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.489 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.489 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.492 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.493 00:12:43.493 real 0m14.016s 00:12:43.493 user 0m17.012s 00:12:43.493 sys 0m6.621s 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.493 
************************************ 00:12:43.493 END TEST nvmf_referrals 00:12:43.493 ************************************ 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.493 ************************************ 00:12:43.493 START TEST nvmf_connect_disconnect 00:12:43.493 ************************************ 00:12:43.493 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:43.754 * Looking for test storage... 
00:12:43.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.754 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:43.754 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:43.754 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.754 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.755 --rc genhtml_branch_coverage=1 00:12:43.755 --rc genhtml_function_coverage=1 00:12:43.755 --rc genhtml_legend=1 00:12:43.755 --rc geninfo_all_blocks=1 00:12:43.755 --rc geninfo_unexecuted_blocks=1 00:12:43.755 00:12:43.755 ' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.755 --rc genhtml_branch_coverage=1 00:12:43.755 --rc genhtml_function_coverage=1 00:12:43.755 --rc genhtml_legend=1 00:12:43.755 --rc geninfo_all_blocks=1 00:12:43.755 --rc geninfo_unexecuted_blocks=1 00:12:43.755 00:12:43.755 ' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.755 --rc genhtml_branch_coverage=1 00:12:43.755 --rc genhtml_function_coverage=1 00:12:43.755 --rc genhtml_legend=1 00:12:43.755 --rc geninfo_all_blocks=1 00:12:43.755 --rc geninfo_unexecuted_blocks=1 00:12:43.755 00:12:43.755 ' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.755 --rc genhtml_branch_coverage=1 00:12:43.755 --rc genhtml_function_coverage=1 00:12:43.755 --rc genhtml_legend=1 00:12:43.755 --rc geninfo_all_blocks=1 00:12:43.755 --rc geninfo_unexecuted_blocks=1 00:12:43.755 00:12:43.755 ' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.755 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.756 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.897 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.897 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:51.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:51.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.897 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:51.897 Found net devices under 0000:31:00.0: cvl_0_0 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.897 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:51.897 Found net devices under 0000:31:00.1: cvl_0_1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.897 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.898 11:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:12:51.898 00:12:51.898 --- 10.0.0.2 ping statistics --- 00:12:51.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.898 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:12:51.898 00:12:51.898 --- 10.0.0.1 ping statistics --- 00:12:51.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.898 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2399109 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2399109 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2399109 ']' 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.898 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.898 [2024-12-07 11:24:50.676458] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:51.898 [2024-12-07 11:24:50.676597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.898 [2024-12-07 11:24:50.826576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.898 [2024-12-07 11:24:50.928057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:51.898 [2024-12-07 11:24:50.928101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.898 [2024-12-07 11:24:50.928113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.898 [2024-12-07 11:24:50.928124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.898 [2024-12-07 11:24:50.928133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.898 [2024-12-07 11:24:50.930409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.898 [2024-12-07 11:24:50.930491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.898 [2024-12-07 11:24:50.930606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.898 [2024-12-07 11:24:50.930630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:52.159 11:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.159 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.159 [2024-12-07 11:24:51.496965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.421 11:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 [2024-12-07 11:24:51.607507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:52.421 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:54.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.508 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.631 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.361 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.147 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.309 rmmod nvme_tcp 00:16:48.309 rmmod nvme_fabrics 00:16:48.309 rmmod nvme_keyring 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2399109 ']' 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2399109 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2399109 ']' 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2399109 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2399109 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399109' 00:16:48.309 killing process with pid 2399109 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2399109 00:16:48.309 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2399109 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.007 11:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.007 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.925 00:16:50.925 real 4m7.391s 00:16:50.925 user 15m35.955s 00:16:50.925 sys 0m29.660s 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:50.925 ************************************ 00:16:50.925 END TEST nvmf_connect_disconnect 00:16:50.925 ************************************ 00:16:50.925 11:28:50 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.925 11:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.185 ************************************ 00:16:51.185 START TEST nvmf_multitarget 00:16:51.185 ************************************ 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:51.185 * Looking for test storage... 00:16:51.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.185 --rc genhtml_branch_coverage=1 00:16:51.185 --rc genhtml_function_coverage=1 00:16:51.185 --rc genhtml_legend=1 00:16:51.185 --rc geninfo_all_blocks=1 00:16:51.185 --rc 
geninfo_unexecuted_blocks=1 00:16:51.185 00:16:51.185 ' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.185 --rc genhtml_branch_coverage=1 00:16:51.185 --rc genhtml_function_coverage=1 00:16:51.185 --rc genhtml_legend=1 00:16:51.185 --rc geninfo_all_blocks=1 00:16:51.185 --rc geninfo_unexecuted_blocks=1 00:16:51.185 00:16:51.185 ' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.185 --rc genhtml_branch_coverage=1 00:16:51.185 --rc genhtml_function_coverage=1 00:16:51.185 --rc genhtml_legend=1 00:16:51.185 --rc geninfo_all_blocks=1 00:16:51.185 --rc geninfo_unexecuted_blocks=1 00:16:51.185 00:16:51.185 ' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.185 --rc genhtml_branch_coverage=1 00:16:51.185 --rc genhtml_function_coverage=1 00:16:51.185 --rc genhtml_legend=1 00:16:51.185 --rc geninfo_all_blocks=1 00:16:51.185 --rc geninfo_unexecuted_blocks=1 00:16:51.185 00:16:51.185 ' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.185 11:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.185 11:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.185 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.445 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:51.445 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:51.445 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:51.445 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:59.581 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:59.582 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:59.582 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:59.582 Found net devices under 0000:31:00.0: cvl_0_0 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:59.582 Found net devices under 0000:31:00.1: cvl_0_1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:59.582 11:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:59.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:16:59.582 00:16:59.582 --- 10.0.0.2 ping statistics --- 00:16:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.582 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:16:59.582 00:16:59.582 --- 10.0.0.1 ping statistics --- 00:16:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.582 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.582 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.583 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2451117 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2451117 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2451117 ']' 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.583 [2024-12-07 11:28:58.111274] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:59.583 [2024-12-07 11:28:58.111384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.583 [2024-12-07 11:28:58.244403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.583 [2024-12-07 11:28:58.346038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.583 [2024-12-07 11:28:58.346081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.583 [2024-12-07 11:28:58.346093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.583 [2024-12-07 11:28:58.346105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.583 [2024-12-07 11:28:58.346114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.583 [2024-12-07 11:28:58.348314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.583 [2024-12-07 11:28:58.348428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.583 [2024-12-07 11:28:58.348574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.583 [2024-12-07 11:28:58.348599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.583 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:59.844 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:59.844 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:59.844 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:59.844 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:59.844 "nvmf_tgt_1" 00:16:59.844 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:00.104 "nvmf_tgt_2" 00:17:00.104 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.104 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:00.104 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:00.104 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:00.104 true 00:17:00.104 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:00.365 true 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.365 11:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.365 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.365 rmmod nvme_tcp 00:17:00.365 rmmod nvme_fabrics 00:17:00.365 rmmod nvme_keyring 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2451117 ']' 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2451117 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2451117 ']' 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2451117 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451117 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451117' 00:17:00.625 killing process with pid 2451117 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2451117 00:17:00.625 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2451117 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.567 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.482 
00:17:03.482 real 0m12.374s 00:17:03.482 user 0m11.479s 00:17:03.482 sys 0m6.186s 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.482 ************************************ 00:17:03.482 END TEST nvmf_multitarget 00:17:03.482 ************************************ 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.482 ************************************ 00:17:03.482 START TEST nvmf_rpc 00:17:03.482 ************************************ 00:17:03.482 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.744 * Looking for test storage... 
00:17:03.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.744 11:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:03.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.744 --rc genhtml_branch_coverage=1 00:17:03.744 --rc genhtml_function_coverage=1 00:17:03.744 --rc genhtml_legend=1 00:17:03.744 --rc geninfo_all_blocks=1 00:17:03.744 --rc geninfo_unexecuted_blocks=1 
00:17:03.744 00:17:03.744 ' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:03.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.744 --rc genhtml_branch_coverage=1 00:17:03.744 --rc genhtml_function_coverage=1 00:17:03.744 --rc genhtml_legend=1 00:17:03.744 --rc geninfo_all_blocks=1 00:17:03.744 --rc geninfo_unexecuted_blocks=1 00:17:03.744 00:17:03.744 ' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:03.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.744 --rc genhtml_branch_coverage=1 00:17:03.744 --rc genhtml_function_coverage=1 00:17:03.744 --rc genhtml_legend=1 00:17:03.744 --rc geninfo_all_blocks=1 00:17:03.744 --rc geninfo_unexecuted_blocks=1 00:17:03.744 00:17:03.744 ' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:03.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.744 --rc genhtml_branch_coverage=1 00:17:03.744 --rc genhtml_function_coverage=1 00:17:03.744 --rc genhtml_legend=1 00:17:03.744 --rc geninfo_all_blocks=1 00:17:03.744 --rc geninfo_unexecuted_blocks=1 00:17:03.744 00:17:03.744 ' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.744 11:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.744 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.745 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.745 11:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.878 
11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.878 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:17:11.879 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:11.879 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:11.879 Found net devices under 0000:31:00.0: cvl_0_0 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:11.879 Found net devices under 0000:31:00.1: cvl_0_1 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.879 11:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.879 
11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:17:11.879 00:17:11.879 --- 10.0.0.2 ping statistics --- 00:17:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.879 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:17:11.879 00:17:11.879 --- 10.0.0.1 ping statistics --- 00:17:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.879 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2455879 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2455879 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2455879 ']' 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.879 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.879 [2024-12-07 11:29:10.701607] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:11.879 [2024-12-07 11:29:10.701736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.879 [2024-12-07 11:29:10.864842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.880 [2024-12-07 11:29:10.966251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.880 [2024-12-07 11:29:10.966309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:11.880 [2024-12-07 11:29:10.966321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.880 [2024-12-07 11:29:10.966333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.880 [2024-12-07 11:29:10.966342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.880 [2024-12-07 11:29:10.968596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.880 [2024-12-07 11:29:10.968681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.880 [2024-12-07 11:29:10.968799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.880 [2024-12-07 11:29:10.968823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.138 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.138 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:12.138 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.138 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.138 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.399 11:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:12.399 "tick_rate": 2400000000, 00:17:12.399 "poll_groups": [ 00:17:12.399 { 00:17:12.399 "name": "nvmf_tgt_poll_group_000", 00:17:12.399 "admin_qpairs": 0, 00:17:12.399 "io_qpairs": 0, 00:17:12.399 "current_admin_qpairs": 0, 00:17:12.399 "current_io_qpairs": 0, 00:17:12.399 "pending_bdev_io": 0, 00:17:12.399 "completed_nvme_io": 0, 00:17:12.399 "transports": [] 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "nvmf_tgt_poll_group_001", 00:17:12.399 "admin_qpairs": 0, 00:17:12.399 "io_qpairs": 0, 00:17:12.399 "current_admin_qpairs": 0, 00:17:12.399 "current_io_qpairs": 0, 00:17:12.399 "pending_bdev_io": 0, 00:17:12.399 "completed_nvme_io": 0, 00:17:12.399 "transports": [] 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "nvmf_tgt_poll_group_002", 00:17:12.399 "admin_qpairs": 0, 00:17:12.399 "io_qpairs": 0, 00:17:12.399 "current_admin_qpairs": 0, 00:17:12.399 "current_io_qpairs": 0, 00:17:12.399 "pending_bdev_io": 0, 00:17:12.399 "completed_nvme_io": 0, 00:17:12.399 "transports": [] 00:17:12.399 }, 00:17:12.399 { 00:17:12.399 "name": "nvmf_tgt_poll_group_003", 00:17:12.399 "admin_qpairs": 0, 00:17:12.399 "io_qpairs": 0, 00:17:12.399 "current_admin_qpairs": 0, 00:17:12.399 "current_io_qpairs": 0, 00:17:12.399 "pending_bdev_io": 0, 00:17:12.399 "completed_nvme_io": 0, 00:17:12.399 "transports": [] 00:17:12.399 } 00:17:12.399 ] 00:17:12.399 }' 00:17:12.399 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:12.400 11:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.400 [2024-12-07 11:29:11.640439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:12.400 "tick_rate": 2400000000, 00:17:12.400 "poll_groups": [ 00:17:12.400 { 00:17:12.400 "name": "nvmf_tgt_poll_group_000", 00:17:12.400 "admin_qpairs": 0, 00:17:12.400 "io_qpairs": 0, 00:17:12.400 "current_admin_qpairs": 0, 00:17:12.400 "current_io_qpairs": 0, 00:17:12.400 "pending_bdev_io": 0, 00:17:12.400 "completed_nvme_io": 0, 00:17:12.400 "transports": [ 00:17:12.400 { 00:17:12.400 "trtype": "TCP" 00:17:12.400 } 00:17:12.400 ] 00:17:12.400 }, 00:17:12.400 { 00:17:12.400 "name": "nvmf_tgt_poll_group_001", 00:17:12.400 "admin_qpairs": 0, 00:17:12.400 "io_qpairs": 0, 00:17:12.400 "current_admin_qpairs": 0, 00:17:12.400 "current_io_qpairs": 0, 00:17:12.400 "pending_bdev_io": 0, 00:17:12.400 
"completed_nvme_io": 0, 00:17:12.400 "transports": [ 00:17:12.400 { 00:17:12.400 "trtype": "TCP" 00:17:12.400 } 00:17:12.400 ] 00:17:12.400 }, 00:17:12.400 { 00:17:12.400 "name": "nvmf_tgt_poll_group_002", 00:17:12.400 "admin_qpairs": 0, 00:17:12.400 "io_qpairs": 0, 00:17:12.400 "current_admin_qpairs": 0, 00:17:12.400 "current_io_qpairs": 0, 00:17:12.400 "pending_bdev_io": 0, 00:17:12.400 "completed_nvme_io": 0, 00:17:12.400 "transports": [ 00:17:12.400 { 00:17:12.400 "trtype": "TCP" 00:17:12.400 } 00:17:12.400 ] 00:17:12.400 }, 00:17:12.400 { 00:17:12.400 "name": "nvmf_tgt_poll_group_003", 00:17:12.400 "admin_qpairs": 0, 00:17:12.400 "io_qpairs": 0, 00:17:12.400 "current_admin_qpairs": 0, 00:17:12.400 "current_io_qpairs": 0, 00:17:12.400 "pending_bdev_io": 0, 00:17:12.400 "completed_nvme_io": 0, 00:17:12.400 "transports": [ 00:17:12.400 { 00:17:12.400 "trtype": "TCP" 00:17:12.400 } 00:17:12.400 ] 00:17:12.400 } 00:17:12.400 ] 00:17:12.400 }' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:12.400 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:12.706 
11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.706 Malloc1 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:12.706 11:29:11 
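The `jcount`/`jsum` checks traced above pipe `nvmf_get_stats` output through `jq` and `awk` to verify four poll groups (one per core in `-m 0xF`) with zero queue pairs. The same checks can be sketched in Python against a trimmed copy of the stats JSON shown in the log (values illustrative, taken from the trace; this is a sketch of the check, not SPDK code):

```python
import json

# Trimmed copy of the nvmf_get_stats output captured in the trace above.
stats = json.loads("""
{
  "tick_rate": 2400000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 0, "io_qpairs": 0,
     "transports": [{"trtype": "TCP"}]},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0, "io_qpairs": 0,
     "transports": [{"trtype": "TCP"}]},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0, "io_qpairs": 0,
     "transports": [{"trtype": "TCP"}]},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0, "io_qpairs": 0,
     "transports": [{"trtype": "TCP"}]}
  ]
}
""")

# jcount '.poll_groups[].name' -> number of poll groups (jq ... | wc -l)
jcount = len([g["name"] for g in stats["poll_groups"]])

# jsum '.poll_groups[].admin_qpairs' / '.poll_groups[].io_qpairs'
# (jq ... | awk '{s+=$1}END{print s}')
jsum_admin = sum(g["admin_qpairs"] for g in stats["poll_groups"])
jsum_io = sum(g["io_qpairs"] for g in stats["poll_groups"])

print(jcount, jsum_admin, jsum_io)  # 4 0 0
```

After `nvmf_create_transport -t tcp`, each poll group additionally reports a `{"trtype": "TCP"}` entry under `transports`, which is what the `.poll_groups[0].transports[0]` probe distinguishes from `null`.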
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.706 [2024-12-07 11:29:11.866092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:12.706 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:12.707 [2024-12-07 11:29:11.903642] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:12.707 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:12.707 could not add new controller: failed to write to nvme-fabrics device 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.707 11:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.092 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.092 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.092 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.092 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.092 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:16.635 11:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.635 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.636 [2024-12-07 11:29:15.784486] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:16.636 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:16.636 could not add new controller: failed to write to nvme-fabrics device 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:16.636 
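The two failed `nvme connect` attempts above both hit the same gate: `ctrlr.c` rejects the connection because the host NQN is neither in the subsystem's host list nor covered by allow-any-host. A simplified model of that check (not SPDK's actual implementation; class and field names here are illustrative) behaves the same way as the `nvmf_subsystem_add_host` / `nvmf_subsystem_allow_any_host` steps in the trace:

```python
# Simplified model of the allow-host gate behind the
# "Subsystem ... does not allow host ..." errors seen above.
HOST = "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396"

class Subsystem:
    def __init__(self):
        self.allow_any_host = False   # toggled by nvmf_subsystem_allow_any_host -e/-d
        self.hosts = set()            # populated by nvmf_subsystem_add_host

    def access_allowed(self, hostnqn):
        # Connect is admitted if any host is allowed, or this NQN was added.
        return self.allow_any_host or hostnqn in self.hosts

sub = Subsystem()
denied = sub.access_allowed(HOST)      # False: connect fails with an I/O error
sub.hosts.add(HOST)                    # nvmf_subsystem_add_host
after_add = sub.access_allowed(HOST)   # True

sub2 = Subsystem()
sub2.allow_any_host = True             # nvmf_subsystem_allow_any_host -e
any_host = sub2.access_allowed("nqn.2016-06.io.spdk:some-other-host")  # True
print(denied, after_add, any_host)
```

This matches the trace's ordering: the first connect fails, succeeds after `nvmf_subsystem_add_host`, and the later connect succeeds once allow-any-host is enabled.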
11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.636 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.018 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.018 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.018 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.018 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.018 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:20.562 11:29:19 
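The `waitforserial` helper seen throughout the trace polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected number of NVMe devices appears, retrying up to roughly 15 times. A hedged sketch of that polling loop (the probe callable stands in for the `lsblk | grep -c` pipeline; retry count and delay are assumptions for illustration):

```python
import time

def waitforserial(probe, expected=1, retries=15, delay=0.0):
    """Poll probe() until it reports `expected` devices or retries run out."""
    for _ in range(retries):
        if probe() == expected:
            return True
        time.sleep(delay)  # the real helper sleeps between lsblk checks
    return False

# Simulated probe: the namespace block device shows up on the third poll.
polls = iter([0, 0, 1])
print(waitforserial(lambda: next(polls)))  # True
```

The disconnect-side helper (`waitforserial_disconnect`) is the mirror image: it polls until `grep -q -w <serial>` stops matching, so the test only proceeds once the kernel has torn the block device down.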
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.562 [2024-12-07 11:29:19.654442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:20.562 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.563 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:21.949 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:21.949 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:21.949 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:21.949 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:21.949 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:23.860 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:24.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.122 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 [2024-12-07 11:29:23.512051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.383 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:25.771 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:25.771 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:25.771 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:25.771 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:25.771 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:28.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 [2024-12-07 11:29:27.327950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.309 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:29.692 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:29.692 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:29.692 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:29.692 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:29.692 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:31.660 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:31.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 [2024-12-07 11:29:31.232878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.921 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:33.832 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:33.832 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:33.832 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:33.832 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:33.832 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:35.746 11:29:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:35.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.746 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.007 [2024-12-07 11:29:35.100924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.007 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:37.390 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:37.390 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:37.390 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:37.390 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:37.390 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:39.304 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:39.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:39.566 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 [2024-12-07 11:29:38.965004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 [2024-12-07 11:29:39.033178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.834 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 [2024-12-07 11:29:39.101362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:39.835 [2024-12-07 11:29:39.173599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.835 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t
tcp -a 10.0.0.2 -s 4420 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.097 [2024-12-07 11:29:39.245830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.097 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:40.098 "tick_rate": 2400000000, 00:17:40.098 "poll_groups": [ 00:17:40.098 { 00:17:40.098 "name": "nvmf_tgt_poll_group_000", 00:17:40.098 "admin_qpairs": 0, 00:17:40.098 "io_qpairs": 224, 00:17:40.098 "current_admin_qpairs": 0, 00:17:40.098 "current_io_qpairs": 0, 00:17:40.098 "pending_bdev_io": 0, 00:17:40.098 "completed_nvme_io": 277, 00:17:40.098 "transports": [ 00:17:40.098 { 00:17:40.098 "trtype": "TCP" 00:17:40.098 } 00:17:40.098 ] 00:17:40.098 }, 00:17:40.098 { 00:17:40.098 "name": "nvmf_tgt_poll_group_001", 00:17:40.098 "admin_qpairs": 1, 00:17:40.098 "io_qpairs": 223, 00:17:40.098 "current_admin_qpairs": 0, 00:17:40.098 "current_io_qpairs": 0, 00:17:40.098 "pending_bdev_io": 0, 00:17:40.098 "completed_nvme_io": 518, 00:17:40.098 "transports": [ 00:17:40.098 { 00:17:40.098 "trtype": "TCP" 00:17:40.098 } 00:17:40.098 ] 00:17:40.098 }, 00:17:40.098 { 00:17:40.098 "name": "nvmf_tgt_poll_group_002", 00:17:40.098 "admin_qpairs": 6, 00:17:40.098 "io_qpairs": 218, 00:17:40.098 "current_admin_qpairs": 0, 00:17:40.098 "current_io_qpairs": 0, 00:17:40.098 "pending_bdev_io": 0, 
00:17:40.098 "completed_nvme_io": 220, 00:17:40.098 "transports": [ 00:17:40.098 { 00:17:40.098 "trtype": "TCP" 00:17:40.098 } 00:17:40.098 ] 00:17:40.098 }, 00:17:40.098 { 00:17:40.098 "name": "nvmf_tgt_poll_group_003", 00:17:40.098 "admin_qpairs": 0, 00:17:40.098 "io_qpairs": 224, 00:17:40.098 "current_admin_qpairs": 0, 00:17:40.098 "current_io_qpairs": 0, 00:17:40.098 "pending_bdev_io": 0, 00:17:40.098 "completed_nvme_io": 224, 00:17:40.098 "transports": [ 00:17:40.098 { 00:17:40.098 "trtype": "TCP" 00:17:40.098 } 00:17:40.098 ] 00:17:40.098 } 00:17:40.098 ] 00:17:40.098 }' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.098 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.098 rmmod nvme_tcp 00:17:40.098 rmmod nvme_fabrics 00:17:40.360 rmmod nvme_keyring 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2455879 ']' 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2455879 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2455879 ']' 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2455879 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455879 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455879' 00:17:40.360 killing process with pid 2455879 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2455879 00:17:40.360 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2455879 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.305 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.223 00:17:43.223 real 0m39.771s 00:17:43.223 user 1m58.850s 00:17:43.223 sys 0m8.292s 00:17:43.223 11:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.223 ************************************ 00:17:43.223 END TEST nvmf_rpc 00:17:43.223 ************************************ 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.223 11:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.485 ************************************ 00:17:43.485 START TEST nvmf_invalid 00:17:43.485 ************************************ 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:43.485 * Looking for test storage... 
00:17:43.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:43.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.485 --rc genhtml_branch_coverage=1 00:17:43.485 --rc 
genhtml_function_coverage=1 00:17:43.485 --rc genhtml_legend=1 00:17:43.485 --rc geninfo_all_blocks=1 00:17:43.485 --rc geninfo_unexecuted_blocks=1 00:17:43.485 00:17:43.485 ' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:43.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.485 --rc genhtml_branch_coverage=1 00:17:43.485 --rc genhtml_function_coverage=1 00:17:43.485 --rc genhtml_legend=1 00:17:43.485 --rc geninfo_all_blocks=1 00:17:43.485 --rc geninfo_unexecuted_blocks=1 00:17:43.485 00:17:43.485 ' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:43.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.485 --rc genhtml_branch_coverage=1 00:17:43.485 --rc genhtml_function_coverage=1 00:17:43.485 --rc genhtml_legend=1 00:17:43.485 --rc geninfo_all_blocks=1 00:17:43.485 --rc geninfo_unexecuted_blocks=1 00:17:43.485 00:17:43.485 ' 00:17:43.485 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:43.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.486 --rc genhtml_branch_coverage=1 00:17:43.486 --rc genhtml_function_coverage=1 00:17:43.486 --rc genhtml_legend=1 00:17:43.486 --rc geninfo_all_blocks=1 00:17:43.486 --rc geninfo_unexecuted_blocks=1 00:17:43.486 00:17:43.486 ' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.486 11:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.486 11:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.486 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.748 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.748 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.748 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.748 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.977 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.977 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.978 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.978 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.978 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.978 11:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.978 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.978 11:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:17:51.978 00:17:51.978 --- 10.0.0.2 ping statistics --- 00:17:51.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.978 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:51.978 00:17:51.978 --- 10.0.0.1 ping statistics --- 00:17:51.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.978 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.978 11:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2465842 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2465842 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2465842 ']' 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:51.978 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.979 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 [2024-12-07 11:29:50.282187] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:51.979 [2024-12-07 11:29:50.282318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.979 [2024-12-07 11:29:50.432174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.979 [2024-12-07 11:29:50.536632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.979 [2024-12-07 11:29:50.536676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.979 [2024-12-07 11:29:50.536687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.979 [2024-12-07 11:29:50.536699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.979 [2024-12-07 11:29:50.536709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.979 [2024-12-07 11:29:50.538974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.979 [2024-12-07 11:29:50.539078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.979 [2024-12-07 11:29:50.539428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.979 [2024-12-07 11:29:50.539445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28747 00:17:51.979 [2024-12-07 11:29:51.254876] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:51.979 { 00:17:51.979 "nqn": "nqn.2016-06.io.spdk:cnode28747", 00:17:51.979 "tgt_name": "foobar", 00:17:51.979 "method": "nvmf_create_subsystem", 00:17:51.979 "req_id": 1 00:17:51.979 } 00:17:51.979 Got JSON-RPC error 
response 00:17:51.979 response: 00:17:51.979 { 00:17:51.979 "code": -32603, 00:17:51.979 "message": "Unable to find target foobar" 00:17:51.979 }' 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:51.979 { 00:17:51.979 "nqn": "nqn.2016-06.io.spdk:cnode28747", 00:17:51.979 "tgt_name": "foobar", 00:17:51.979 "method": "nvmf_create_subsystem", 00:17:51.979 "req_id": 1 00:17:51.979 } 00:17:51.979 Got JSON-RPC error response 00:17:51.979 response: 00:17:51.979 { 00:17:51.979 "code": -32603, 00:17:51.979 "message": "Unable to find target foobar" 00:17:51.979 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:51.979 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30611 00:17:52.259 [2024-12-07 11:29:51.447540] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30611: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:52.259 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:52.259 { 00:17:52.259 "nqn": "nqn.2016-06.io.spdk:cnode30611", 00:17:52.259 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:52.259 "method": "nvmf_create_subsystem", 00:17:52.259 "req_id": 1 00:17:52.259 } 00:17:52.259 Got JSON-RPC error response 00:17:52.259 response: 00:17:52.259 { 00:17:52.259 "code": -32602, 00:17:52.259 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:52.259 }' 00:17:52.259 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:52.259 { 00:17:52.259 "nqn": "nqn.2016-06.io.spdk:cnode30611", 00:17:52.259 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:52.259 "method": "nvmf_create_subsystem", 
00:17:52.259 "req_id": 1 00:17:52.259 } 00:17:52.259 Got JSON-RPC error response 00:17:52.259 response: 00:17:52.259 { 00:17:52.259 "code": -32602, 00:17:52.259 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:52.259 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:52.259 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:52.259 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10662 00:17:52.521 [2024-12-07 11:29:51.640156] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10662: invalid model number 'SPDK_Controller' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:52.521 { 00:17:52.521 "nqn": "nqn.2016-06.io.spdk:cnode10662", 00:17:52.521 "model_number": "SPDK_Controller\u001f", 00:17:52.521 "method": "nvmf_create_subsystem", 00:17:52.521 "req_id": 1 00:17:52.521 } 00:17:52.521 Got JSON-RPC error response 00:17:52.521 response: 00:17:52.521 { 00:17:52.521 "code": -32602, 00:17:52.521 "message": "Invalid MN SPDK_Controller\u001f" 00:17:52.521 }' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:52.521 { 00:17:52.521 "nqn": "nqn.2016-06.io.spdk:cnode10662", 00:17:52.521 "model_number": "SPDK_Controller\u001f", 00:17:52.521 "method": "nvmf_create_subsystem", 00:17:52.521 "req_id": 1 00:17:52.521 } 00:17:52.521 Got JSON-RPC error response 00:17:52.521 response: 00:17:52.521 { 00:17:52.521 "code": -32602, 00:17:52.521 "message": "Invalid MN SPDK_Controller\u001f" 00:17:52.521 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 
11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.521 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 
00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:52.522 
11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*nS!o]N6- 2Jk~@1$(`+' 00:17:52.522 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '*nS!o]N6- 2Jk~@1$(`+' nqn.2016-06.io.spdk:cnode615 00:17:52.783 [2024-12-07 11:29:51.993311] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode615: invalid serial number '*nS!o]N6- 2Jk~@1$(`+' 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:52.783 { 00:17:52.783 "nqn": "nqn.2016-06.io.spdk:cnode615", 00:17:52.783 "serial_number": "*\u007fnS!o]N6- 2Jk~@1$(`+", 00:17:52.783 "method": "nvmf_create_subsystem", 00:17:52.783 "req_id": 1 00:17:52.783 } 00:17:52.783 Got JSON-RPC error response 00:17:52.783 response: 00:17:52.783 { 00:17:52.783 "code": -32602, 00:17:52.783 "message": "Invalid SN *\u007fnS!o]N6- 2Jk~@1$(`+" 00:17:52.783 }' 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:52.783 { 00:17:52.783 "nqn": "nqn.2016-06.io.spdk:cnode615", 00:17:52.783 "serial_number": "*\u007fnS!o]N6- 2Jk~@1$(`+", 00:17:52.783 
"method": "nvmf_create_subsystem", 00:17:52.783 "req_id": 1 00:17:52.783 } 00:17:52.783 Got JSON-RPC error response 00:17:52.783 response: 00:17:52.783 { 00:17:52.783 "code": -32602, 00:17:52.783 "message": "Invalid SN *\u007fnS!o]N6- 2Jk~@1$(`+" 00:17:52.783 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:52.783 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:52.784 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:52.784 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:52.784 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:52.784 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:53.046 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:53.046 11:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 
00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.046 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:53.047 
11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'GV^~IJy10=Y%oB9IYU".w)v'\''8boCAi' 00:17:53.047 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'GV^~IJy10=Y%oB9IYU".w)v'\''8boCAi' nqn.2016-06.io.spdk:cnode16205 00:17:53.307 [2024-12-07 11:29:52.494983] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16205: invalid model number 'GV^~IJy10=Y%oB9IYU".w)v'8boCAi' 00:17:53.307 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:53.307 { 00:17:53.307 "nqn": "nqn.2016-06.io.spdk:cnode16205", 00:17:53.307 "model_number": "GV^~IJy10=Y%oB9IYU\".w)v'\''8boCAi", 00:17:53.307 "method": "nvmf_create_subsystem", 00:17:53.307 "req_id": 1 00:17:53.307 } 00:17:53.307 Got JSON-RPC error response 00:17:53.307 response: 00:17:53.307 { 00:17:53.307 "code": -32602, 00:17:53.307 "message": "Invalid MN GV^~IJy10=Y%oB9IYU\".w)v'\''8boCAi" 00:17:53.307 }' 00:17:53.307 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:53.307 { 00:17:53.307 "nqn": "nqn.2016-06.io.spdk:cnode16205", 00:17:53.307 "model_number": "GV^~IJy10=Y%oB9IYU\".w)v'8boCAi", 00:17:53.307 "method": "nvmf_create_subsystem", 00:17:53.307 "req_id": 1 00:17:53.307 } 00:17:53.307 Got JSON-RPC error response 00:17:53.307 response: 00:17:53.307 { 00:17:53.307 "code": -32602, 00:17:53.307 "message": "Invalid MN GV^~IJy10=Y%oB9IYU\".w)v'8boCAi" 00:17:53.307 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:53.307 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:53.568 [2024-12-07 11:29:52.679678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # head -n 1 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:53.568 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:53.828 [2024-12-07 11:29:53.068930] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:53.828 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:53.828 { 00:17:53.828 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.828 "listen_address": { 00:17:53.828 "trtype": "tcp", 00:17:53.828 "traddr": "", 00:17:53.828 "trsvcid": "4421" 00:17:53.828 }, 00:17:53.828 "method": "nvmf_subsystem_remove_listener", 00:17:53.828 "req_id": 1 00:17:53.828 } 00:17:53.828 Got JSON-RPC error response 00:17:53.828 response: 00:17:53.828 { 00:17:53.828 "code": -32602, 00:17:53.828 "message": "Invalid parameters" 00:17:53.828 }' 00:17:53.828 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:53.828 { 00:17:53.828 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:53.828 "listen_address": { 00:17:53.828 "trtype": "tcp", 00:17:53.828 "traddr": "", 00:17:53.828 "trsvcid": "4421" 00:17:53.828 }, 00:17:53.828 "method": "nvmf_subsystem_remove_listener", 00:17:53.828 "req_id": 1 00:17:53.828 } 00:17:53.828 Got JSON-RPC error response 00:17:53.828 response: 00:17:53.828 { 00:17:53.828 "code": -32602, 00:17:53.828 "message": "Invalid parameters" 00:17:53.828 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:53.828 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13375 -i 0 00:17:54.089 [2024-12-07 11:29:53.249460] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode13375: invalid cntlid range [0-65519] 00:17:54.089 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:54.089 { 00:17:54.089 "nqn": "nqn.2016-06.io.spdk:cnode13375", 00:17:54.089 "min_cntlid": 0, 00:17:54.089 "method": "nvmf_create_subsystem", 00:17:54.089 "req_id": 1 00:17:54.089 } 00:17:54.089 Got JSON-RPC error response 00:17:54.089 response: 00:17:54.089 { 00:17:54.089 "code": -32602, 00:17:54.089 "message": "Invalid cntlid range [0-65519]" 00:17:54.089 }' 00:17:54.089 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:54.089 { 00:17:54.089 "nqn": "nqn.2016-06.io.spdk:cnode13375", 00:17:54.089 "min_cntlid": 0, 00:17:54.089 "method": "nvmf_create_subsystem", 00:17:54.089 "req_id": 1 00:17:54.089 } 00:17:54.089 Got JSON-RPC error response 00:17:54.089 response: 00:17:54.089 { 00:17:54.089 "code": -32602, 00:17:54.089 "message": "Invalid cntlid range [0-65519]" 00:17:54.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.089 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29769 -i 65520 00:17:54.089 [2024-12-07 11:29:53.430044] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29769: invalid cntlid range [65520-65519] 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:54.350 { 00:17:54.350 "nqn": "nqn.2016-06.io.spdk:cnode29769", 00:17:54.350 "min_cntlid": 65520, 00:17:54.350 "method": "nvmf_create_subsystem", 00:17:54.350 "req_id": 1 00:17:54.350 } 00:17:54.350 Got JSON-RPC error response 00:17:54.350 response: 00:17:54.350 { 00:17:54.350 "code": -32602, 00:17:54.350 "message": "Invalid cntlid range [65520-65519]" 00:17:54.350 }' 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@76 -- # [[ request: 00:17:54.350 { 00:17:54.350 "nqn": "nqn.2016-06.io.spdk:cnode29769", 00:17:54.350 "min_cntlid": 65520, 00:17:54.350 "method": "nvmf_create_subsystem", 00:17:54.350 "req_id": 1 00:17:54.350 } 00:17:54.350 Got JSON-RPC error response 00:17:54.350 response: 00:17:54.350 { 00:17:54.350 "code": -32602, 00:17:54.350 "message": "Invalid cntlid range [65520-65519]" 00:17:54.350 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22078 -I 0 00:17:54.350 [2024-12-07 11:29:53.614679] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22078: invalid cntlid range [1-0] 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:54.350 { 00:17:54.350 "nqn": "nqn.2016-06.io.spdk:cnode22078", 00:17:54.350 "max_cntlid": 0, 00:17:54.350 "method": "nvmf_create_subsystem", 00:17:54.350 "req_id": 1 00:17:54.350 } 00:17:54.350 Got JSON-RPC error response 00:17:54.350 response: 00:17:54.350 { 00:17:54.350 "code": -32602, 00:17:54.350 "message": "Invalid cntlid range [1-0]" 00:17:54.350 }' 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:54.350 { 00:17:54.350 "nqn": "nqn.2016-06.io.spdk:cnode22078", 00:17:54.350 "max_cntlid": 0, 00:17:54.350 "method": "nvmf_create_subsystem", 00:17:54.350 "req_id": 1 00:17:54.350 } 00:17:54.350 Got JSON-RPC error response 00:17:54.350 response: 00:17:54.350 { 00:17:54.350 "code": -32602, 00:17:54.350 "message": "Invalid cntlid range [1-0]" 00:17:54.350 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.350 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18344 -I 65520 00:17:54.612 [2024-12-07 11:29:53.799285] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18344: invalid cntlid range [1-65520] 00:17:54.612 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:54.612 { 00:17:54.612 "nqn": "nqn.2016-06.io.spdk:cnode18344", 00:17:54.612 "max_cntlid": 65520, 00:17:54.612 "method": "nvmf_create_subsystem", 00:17:54.612 "req_id": 1 00:17:54.612 } 00:17:54.612 Got JSON-RPC error response 00:17:54.612 response: 00:17:54.612 { 00:17:54.612 "code": -32602, 00:17:54.612 "message": "Invalid cntlid range [1-65520]" 00:17:54.612 }' 00:17:54.612 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:54.612 { 00:17:54.612 "nqn": "nqn.2016-06.io.spdk:cnode18344", 00:17:54.612 "max_cntlid": 65520, 00:17:54.612 "method": "nvmf_create_subsystem", 00:17:54.612 "req_id": 1 00:17:54.612 } 00:17:54.612 Got JSON-RPC error response 00:17:54.612 response: 00:17:54.612 { 00:17:54.612 "code": -32602, 00:17:54.612 "message": "Invalid cntlid range [1-65520]" 00:17:54.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.612 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4569 -i 6 -I 5 00:17:54.873 [2024-12-07 11:29:53.987900] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4569: invalid cntlid range [6-5] 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:54.874 { 00:17:54.874 "nqn": "nqn.2016-06.io.spdk:cnode4569", 00:17:54.874 "min_cntlid": 6, 00:17:54.874 "max_cntlid": 5, 00:17:54.874 "method": "nvmf_create_subsystem", 00:17:54.874 "req_id": 1 00:17:54.874 } 00:17:54.874 Got JSON-RPC error response 00:17:54.874 response: 
00:17:54.874 { 00:17:54.874 "code": -32602, 00:17:54.874 "message": "Invalid cntlid range [6-5]" 00:17:54.874 }' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:54.874 { 00:17:54.874 "nqn": "nqn.2016-06.io.spdk:cnode4569", 00:17:54.874 "min_cntlid": 6, 00:17:54.874 "max_cntlid": 5, 00:17:54.874 "method": "nvmf_create_subsystem", 00:17:54.874 "req_id": 1 00:17:54.874 } 00:17:54.874 Got JSON-RPC error response 00:17:54.874 response: 00:17:54.874 { 00:17:54.874 "code": -32602, 00:17:54.874 "message": "Invalid cntlid range [6-5]" 00:17:54.874 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:54.874 { 00:17:54.874 "name": "foobar", 00:17:54.874 "method": "nvmf_delete_target", 00:17:54.874 "req_id": 1 00:17:54.874 } 00:17:54.874 Got JSON-RPC error response 00:17:54.874 response: 00:17:54.874 { 00:17:54.874 "code": -32602, 00:17:54.874 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:54.874 }' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:54.874 { 00:17:54.874 "name": "foobar", 00:17:54.874 "method": "nvmf_delete_target", 00:17:54.874 "req_id": 1 00:17:54.874 } 00:17:54.874 Got JSON-RPC error response 00:17:54.874 response: 00:17:54.874 { 00:17:54.874 "code": -32602, 00:17:54.874 "message": "The specified target doesn't exist, cannot delete it." 
00:17:54.874 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.874 rmmod nvme_tcp 00:17:54.874 rmmod nvme_fabrics 00:17:54.874 rmmod nvme_keyring 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2465842 ']' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2465842 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2465842 ']' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2465842 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.874 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465842 00:17:55.136 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.136 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.136 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465842' 00:17:55.136 killing process with pid 2465842 00:17:55.136 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2465842 00:17:55.136 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2465842 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.079 11:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.079 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.995 00:17:57.995 real 0m14.548s 00:17:57.995 user 0m22.283s 00:17:57.995 sys 0m6.490s 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:57.995 ************************************ 00:17:57.995 END TEST nvmf_invalid 00:17:57.995 ************************************ 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.995 ************************************ 00:17:57.995 START TEST nvmf_connect_stress 00:17:57.995 ************************************ 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:57.995 * Looking for test storage... 
00:17:57.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:57.995 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:58.257 11:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.257 11:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.257 --rc genhtml_branch_coverage=1 00:17:58.257 --rc genhtml_function_coverage=1 00:17:58.257 --rc genhtml_legend=1 00:17:58.257 --rc geninfo_all_blocks=1 00:17:58.257 --rc geninfo_unexecuted_blocks=1 00:17:58.257 00:17:58.257 ' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.257 --rc genhtml_branch_coverage=1 00:17:58.257 --rc genhtml_function_coverage=1 00:17:58.257 --rc genhtml_legend=1 00:17:58.257 --rc geninfo_all_blocks=1 00:17:58.257 --rc geninfo_unexecuted_blocks=1 00:17:58.257 00:17:58.257 ' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.257 --rc genhtml_branch_coverage=1 00:17:58.257 --rc genhtml_function_coverage=1 00:17:58.257 --rc genhtml_legend=1 00:17:58.257 --rc geninfo_all_blocks=1 00:17:58.257 --rc geninfo_unexecuted_blocks=1 00:17:58.257 00:17:58.257 ' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.257 --rc genhtml_branch_coverage=1 00:17:58.257 --rc genhtml_function_coverage=1 00:17:58.257 --rc genhtml_legend=1 00:17:58.257 --rc geninfo_all_blocks=1 00:17:58.257 --rc geninfo_unexecuted_blocks=1 00:17:58.257 00:17:58.257 ' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.257 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:58.258 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.400 11:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:06.400 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.400 11:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:06.400 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.400 11:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:06.400 Found net devices under 0000:31:00.0: cvl_0_0 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:06.400 Found net devices under 0000:31:00.1: cvl_0_1 
00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:06.400 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.401 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.401 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.401 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.401 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:06.401 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:06.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:18:06.401 00:18:06.401 --- 10.0.0.2 ping statistics --- 00:18:06.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.401 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:18:06.401 00:18:06.401 --- 10.0.0.1 ping statistics --- 00:18:06.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.401 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:06.401 11:30:05 
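
[editor's note] The `nvmf_tcp_init` trace above builds the test topology: the two `cvl` interfaces are split across network namespaces so target and initiator traffic cross a real link. Collected into one place, the sequence is (a sketch reconstructed from the trace — interface names, addresses, and port are exactly those in the log; every command requires root):

```
# Flush any stale addresses, then move the target-side interface into
# its own namespace; the initiator side stays in the default namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address each side of the link: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both interfaces (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator side, then verify
# connectivity in both directions, as the trace does with ping -c 1.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The sub-millisecond RTTs in the ping output above confirm the link is local hardware, which is what the `is_hw=yes` branch earlier in the trace selects for.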
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2471483 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2471483 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2471483 ']' 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.401 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.401 [2024-12-07 11:30:05.246185] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:06.401 [2024-12-07 11:30:05.246313] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.401 [2024-12-07 11:30:05.411743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:06.401 [2024-12-07 11:30:05.535491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.401 [2024-12-07 11:30:05.535553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.401 [2024-12-07 11:30:05.535565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.401 [2024-12-07 11:30:05.535578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.401 [2024-12-07 11:30:05.535589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.401 [2024-12-07 11:30:05.538169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.401 [2024-12-07 11:30:05.538467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.401 [2024-12-07 11:30:05.538485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.973 [2024-12-07 11:30:06.064330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.973 [2024-12-07 11:30:06.090228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.973 NULL1 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2471562 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.973 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.234 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.234 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:07.234 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.234 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
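
[editor's note] With `nvmf_tgt` running inside the namespace, the trace configures it over JSON-RPC and launches the stress client. The sequence, reconstructed from the `rpc_cmd` and `connect_stress.sh` lines above (paths, NQN, and parameters are those in the log; `rpc_cmd` is the autotest wrapper around SPDK's `rpc.py` against `/var/tmp/spdk.sock`):

```
# Start the target in the namespace: instance 0, trace mask 0xFFFF,
# reactors on cores 1-3 (-m 0xE).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &

# Configure over RPC: TCP transport with an 8192-byte in-capsule data
# buffer, a subsystem allowing any host with at most 10 namespaces,
# a TCP listener on the target address, and a null bdev as backing.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512

# Launch the stress client against that subsystem for 10 seconds.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!
```

The `seq 1 20` / `cat` loop that follows in the trace appends 20 RPC requests to `rpc.txt`, which the script replays against the target while the client is connecting and disconnecting.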
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.234 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.804 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.804 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:07.804 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:07.804 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.804 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.065 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.065 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:08.065 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.065 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.065 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.326 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.326 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:08.326 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.326 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.326 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.586 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.586 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:08.586 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.586 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.586 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.848 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.848 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:08.848 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.848 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.848 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.422 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.422 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:09.422 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.422 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.422 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.700 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.700 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:09.700 11:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.700 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.700 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.960 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.960 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:09.960 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.960 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.960 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.220 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.220 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:10.220 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.220 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.220 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.481 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.481 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:10.481 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.481 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.481 
11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.053 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.053 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:11.054 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.054 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.054 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.315 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.315 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:11.315 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.315 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.315 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.576 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.576 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:11.576 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.576 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.576 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.837 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.837 
11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:11.837 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.837 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.837 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.096 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.096 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:12.096 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.096 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.096 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.667 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.667 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:12.667 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.667 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.667 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.928 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.928 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:12.928 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:18:12.928 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.928 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.188 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.188 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:13.188 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.188 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.188 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.449 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.449 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:13.449 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.449 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.449 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.710 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.710 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:13.710 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.710 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.710 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:14.282 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.282 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:14.282 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.282 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.282 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.544 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.544 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:14.544 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.544 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.544 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.805 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.805 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:14.805 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.805 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.805 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.066 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.066 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2471562 00:18:15.066 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.066 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.066 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.638 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.638 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:15.638 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.638 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.638 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.899 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.899 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:15.899 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.899 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.899 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.158 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.158 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:16.158 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.158 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:16.158 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.417 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.417 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:16.417 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.417 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.417 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.676 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.676 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:16.676 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.676 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.676 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.245 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.245 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:17.245 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.245 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.245 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.245 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2471562 00:18:17.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2471562) - No such process 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2471562 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:17.505 rmmod nvme_tcp 00:18:17.505 rmmod nvme_fabrics 00:18:17.505 rmmod nvme_keyring 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2471483 ']' 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2471483 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2471483 ']' 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2471483 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471483 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471483' 00:18:17.505 killing process with pid 2471483 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2471483 00:18:17.505 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2471483 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.076 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.336 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.336 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.336 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.336 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.336 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:20.251 00:18:20.251 real 0m22.264s 00:18:20.251 user 0m44.459s 00:18:20.251 sys 0m9.191s 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.251 ************************************ 00:18:20.251 END TEST nvmf_connect_stress 00:18:20.251 ************************************ 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.251 ************************************ 00:18:20.251 START TEST nvmf_fused_ordering 00:18:20.251 ************************************ 00:18:20.251 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:20.513 * Looking for test storage... 00:18:20.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.513 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.514 11:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.514 --rc genhtml_branch_coverage=1 00:18:20.514 --rc genhtml_function_coverage=1 00:18:20.514 --rc genhtml_legend=1 00:18:20.514 --rc geninfo_all_blocks=1 00:18:20.514 --rc geninfo_unexecuted_blocks=1 00:18:20.514 00:18:20.514 ' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.514 --rc genhtml_branch_coverage=1 00:18:20.514 --rc genhtml_function_coverage=1 00:18:20.514 --rc genhtml_legend=1 00:18:20.514 --rc geninfo_all_blocks=1 00:18:20.514 --rc geninfo_unexecuted_blocks=1 00:18:20.514 00:18:20.514 ' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.514 --rc genhtml_branch_coverage=1 00:18:20.514 --rc genhtml_function_coverage=1 00:18:20.514 --rc genhtml_legend=1 00:18:20.514 --rc geninfo_all_blocks=1 00:18:20.514 --rc geninfo_unexecuted_blocks=1 00:18:20.514 00:18:20.514 ' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.514 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:20.514 --rc genhtml_branch_coverage=1 00:18:20.514 --rc genhtml_function_coverage=1 00:18:20.514 --rc genhtml_legend=1 00:18:20.514 --rc geninfo_all_blocks=1 00:18:20.514 --rc geninfo_unexecuted_blocks=1 00:18:20.514 00:18:20.514 ' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.514 11:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:20.514 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.663 11:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.663 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.663 11:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.663 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.663 11:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:28.663 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.663 Found net devices under 0000:31:00.1: cvl_0_1 
00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:18:28.663 00:18:28.663 --- 10.0.0.2 ping statistics --- 00:18:28.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.663 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:18:28.663 00:18:28.663 --- 10.0.0.1 ping statistics --- 00:18:28.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.663 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.663 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.664 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:28.664 11:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2478397 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2478397 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2478397 ']' 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 [2024-12-07 11:30:27.105520] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:28.664 [2024-12-07 11:30:27.105636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.664 [2024-12-07 11:30:27.268847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.664 [2024-12-07 11:30:27.386459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.664 [2024-12-07 11:30:27.386521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.664 [2024-12-07 11:30:27.386534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.664 [2024-12-07 11:30:27.386546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.664 [2024-12-07 11:30:27.386563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
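The nvmf_tcp_init steps traced above (netns creation, moving cvl_0_0 into the namespace, addressing, the iptables ACCEPT rule, and the ping check) can be reproduced by hand. A dry-run sketch follows; it echoes the equivalent commands rather than executing them, since the real steps need root and the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are simply those from this trace:

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology built by nvmf/common.sh above.
# Interface names (cvl_0_0, cvl_0_1) and addresses come from the trace.
run() { echo "$@"; }   # replace the echo with real execution (sudo) to apply

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target-side NIC moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
run ping -c 1 10.0.0.2                               # connectivity check, as in the log
```

Splitting target and initiator across a network namespace lets one host exercise a real TCP path over physical NICs, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.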
00:18:28.664 [2024-12-07 11:30:27.387957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 [2024-12-07 11:30:27.920183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 [2024-12-07 11:30:27.936508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 NULL1 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.664 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:28.923 [2024-12-07 11:30:28.016241] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:28.923 [2024-12-07 11:30:28.016326] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478739 ] 00:18:29.183 Attached to nqn.2016-06.io.spdk:cnode1 00:18:29.183 Namespace ID: 1 size: 1GB 00:18:29.183 fused_ordering(0) 00:18:29.183 fused_ordering(1) 00:18:29.183 fused_ordering(2) 00:18:29.183 fused_ordering(3) 00:18:29.183 fused_ordering(4) 00:18:29.183 fused_ordering(5) 00:18:29.183 fused_ordering(6) 00:18:29.183 fused_ordering(7) 00:18:29.183 fused_ordering(8) 00:18:29.183 fused_ordering(9) 00:18:29.183 fused_ordering(10) 00:18:29.183 fused_ordering(11) 00:18:29.183 fused_ordering(12) 00:18:29.183 fused_ordering(13) 00:18:29.183 fused_ordering(14) 00:18:29.183 fused_ordering(15) 00:18:29.183 fused_ordering(16) 00:18:29.183 fused_ordering(17) 00:18:29.183 fused_ordering(18) 00:18:29.183 fused_ordering(19) 00:18:29.183 fused_ordering(20) 00:18:29.183 fused_ordering(21) 00:18:29.183 fused_ordering(22) 00:18:29.183 fused_ordering(23) 00:18:29.183 fused_ordering(24) 00:18:29.183 fused_ordering(25) 00:18:29.183 fused_ordering(26) 00:18:29.183 fused_ordering(27) 00:18:29.183 
fused_ordering(28) … fused_ordering(755) [sequential per-command fused_ordering trace elided; counters increment without gaps while timestamps advance from 00:18:29.183 through 00:18:30.589]
00:18:30.589 fused_ordering(756) 00:18:30.589 fused_ordering(757) 00:18:30.589 fused_ordering(758) 00:18:30.589 fused_ordering(759) 00:18:30.589 fused_ordering(760) 00:18:30.589 fused_ordering(761) 00:18:30.589 fused_ordering(762) 00:18:30.589 fused_ordering(763) 00:18:30.589 fused_ordering(764) 00:18:30.589 fused_ordering(765) 00:18:30.589 fused_ordering(766) 00:18:30.589 fused_ordering(767) 00:18:30.589 fused_ordering(768) 00:18:30.589 fused_ordering(769) 00:18:30.589 fused_ordering(770) 00:18:30.589 fused_ordering(771) 00:18:30.589 fused_ordering(772) 00:18:30.589 fused_ordering(773) 00:18:30.589 fused_ordering(774) 00:18:30.589 fused_ordering(775) 00:18:30.589 fused_ordering(776) 00:18:30.589 fused_ordering(777) 00:18:30.589 fused_ordering(778) 00:18:30.589 fused_ordering(779) 00:18:30.589 fused_ordering(780) 00:18:30.589 fused_ordering(781) 00:18:30.589 fused_ordering(782) 00:18:30.589 fused_ordering(783) 00:18:30.589 fused_ordering(784) 00:18:30.589 fused_ordering(785) 00:18:30.589 fused_ordering(786) 00:18:30.589 fused_ordering(787) 00:18:30.589 fused_ordering(788) 00:18:30.589 fused_ordering(789) 00:18:30.589 fused_ordering(790) 00:18:30.589 fused_ordering(791) 00:18:30.589 fused_ordering(792) 00:18:30.589 fused_ordering(793) 00:18:30.589 fused_ordering(794) 00:18:30.589 fused_ordering(795) 00:18:30.589 fused_ordering(796) 00:18:30.589 fused_ordering(797) 00:18:30.589 fused_ordering(798) 00:18:30.589 fused_ordering(799) 00:18:30.589 fused_ordering(800) 00:18:30.589 fused_ordering(801) 00:18:30.589 fused_ordering(802) 00:18:30.589 fused_ordering(803) 00:18:30.589 fused_ordering(804) 00:18:30.589 fused_ordering(805) 00:18:30.589 fused_ordering(806) 00:18:30.589 fused_ordering(807) 00:18:30.589 fused_ordering(808) 00:18:30.589 fused_ordering(809) 00:18:30.589 fused_ordering(810) 00:18:30.589 fused_ordering(811) 00:18:30.589 fused_ordering(812) 00:18:30.589 fused_ordering(813) 00:18:30.589 fused_ordering(814) 00:18:30.589 fused_ordering(815) 00:18:30.589 
fused_ordering(816) 00:18:30.589 fused_ordering(817) 00:18:30.589 fused_ordering(818) 00:18:30.589 fused_ordering(819) 00:18:30.589 fused_ordering(820) 00:18:31.165 fused_ordering(821) 00:18:31.165 fused_ordering(822) 00:18:31.165 fused_ordering(823) 00:18:31.165 fused_ordering(824) 00:18:31.165 fused_ordering(825) 00:18:31.165 fused_ordering(826) 00:18:31.165 fused_ordering(827) 00:18:31.165 fused_ordering(828) 00:18:31.165 fused_ordering(829) 00:18:31.165 fused_ordering(830) 00:18:31.165 fused_ordering(831) 00:18:31.165 fused_ordering(832) 00:18:31.165 fused_ordering(833) 00:18:31.165 fused_ordering(834) 00:18:31.165 fused_ordering(835) 00:18:31.165 fused_ordering(836) 00:18:31.165 fused_ordering(837) 00:18:31.165 fused_ordering(838) 00:18:31.165 fused_ordering(839) 00:18:31.165 fused_ordering(840) 00:18:31.165 fused_ordering(841) 00:18:31.165 fused_ordering(842) 00:18:31.165 fused_ordering(843) 00:18:31.165 fused_ordering(844) 00:18:31.165 fused_ordering(845) 00:18:31.165 fused_ordering(846) 00:18:31.165 fused_ordering(847) 00:18:31.165 fused_ordering(848) 00:18:31.165 fused_ordering(849) 00:18:31.165 fused_ordering(850) 00:18:31.165 fused_ordering(851) 00:18:31.165 fused_ordering(852) 00:18:31.165 fused_ordering(853) 00:18:31.165 fused_ordering(854) 00:18:31.165 fused_ordering(855) 00:18:31.165 fused_ordering(856) 00:18:31.165 fused_ordering(857) 00:18:31.165 fused_ordering(858) 00:18:31.165 fused_ordering(859) 00:18:31.165 fused_ordering(860) 00:18:31.165 fused_ordering(861) 00:18:31.165 fused_ordering(862) 00:18:31.165 fused_ordering(863) 00:18:31.165 fused_ordering(864) 00:18:31.165 fused_ordering(865) 00:18:31.165 fused_ordering(866) 00:18:31.165 fused_ordering(867) 00:18:31.165 fused_ordering(868) 00:18:31.165 fused_ordering(869) 00:18:31.165 fused_ordering(870) 00:18:31.165 fused_ordering(871) 00:18:31.165 fused_ordering(872) 00:18:31.165 fused_ordering(873) 00:18:31.165 fused_ordering(874) 00:18:31.165 fused_ordering(875) 00:18:31.165 fused_ordering(876) 
00:18:31.165 fused_ordering(877) 00:18:31.165 fused_ordering(878) 00:18:31.165 fused_ordering(879) 00:18:31.165 fused_ordering(880) 00:18:31.165 fused_ordering(881) 00:18:31.165 fused_ordering(882) 00:18:31.165 fused_ordering(883) 00:18:31.165 fused_ordering(884) 00:18:31.165 fused_ordering(885) 00:18:31.165 fused_ordering(886) 00:18:31.165 fused_ordering(887) 00:18:31.165 fused_ordering(888) 00:18:31.165 fused_ordering(889) 00:18:31.165 fused_ordering(890) 00:18:31.165 fused_ordering(891) 00:18:31.165 fused_ordering(892) 00:18:31.165 fused_ordering(893) 00:18:31.165 fused_ordering(894) 00:18:31.165 fused_ordering(895) 00:18:31.165 fused_ordering(896) 00:18:31.165 fused_ordering(897) 00:18:31.165 fused_ordering(898) 00:18:31.165 fused_ordering(899) 00:18:31.165 fused_ordering(900) 00:18:31.165 fused_ordering(901) 00:18:31.165 fused_ordering(902) 00:18:31.166 fused_ordering(903) 00:18:31.166 fused_ordering(904) 00:18:31.166 fused_ordering(905) 00:18:31.166 fused_ordering(906) 00:18:31.166 fused_ordering(907) 00:18:31.166 fused_ordering(908) 00:18:31.166 fused_ordering(909) 00:18:31.166 fused_ordering(910) 00:18:31.166 fused_ordering(911) 00:18:31.166 fused_ordering(912) 00:18:31.166 fused_ordering(913) 00:18:31.166 fused_ordering(914) 00:18:31.166 fused_ordering(915) 00:18:31.166 fused_ordering(916) 00:18:31.166 fused_ordering(917) 00:18:31.166 fused_ordering(918) 00:18:31.166 fused_ordering(919) 00:18:31.166 fused_ordering(920) 00:18:31.166 fused_ordering(921) 00:18:31.166 fused_ordering(922) 00:18:31.166 fused_ordering(923) 00:18:31.166 fused_ordering(924) 00:18:31.166 fused_ordering(925) 00:18:31.166 fused_ordering(926) 00:18:31.166 fused_ordering(927) 00:18:31.166 fused_ordering(928) 00:18:31.166 fused_ordering(929) 00:18:31.166 fused_ordering(930) 00:18:31.166 fused_ordering(931) 00:18:31.166 fused_ordering(932) 00:18:31.166 fused_ordering(933) 00:18:31.166 fused_ordering(934) 00:18:31.166 fused_ordering(935) 00:18:31.166 fused_ordering(936) 00:18:31.166 
fused_ordering(937) 00:18:31.166 fused_ordering(938) 00:18:31.166 fused_ordering(939) 00:18:31.166 fused_ordering(940) 00:18:31.166 fused_ordering(941) 00:18:31.166 fused_ordering(942) 00:18:31.166 fused_ordering(943) 00:18:31.166 fused_ordering(944) 00:18:31.166 fused_ordering(945) 00:18:31.166 fused_ordering(946) 00:18:31.166 fused_ordering(947) 00:18:31.166 fused_ordering(948) 00:18:31.166 fused_ordering(949) 00:18:31.166 fused_ordering(950) 00:18:31.166 fused_ordering(951) 00:18:31.166 fused_ordering(952) 00:18:31.166 fused_ordering(953) 00:18:31.166 fused_ordering(954) 00:18:31.166 fused_ordering(955) 00:18:31.166 fused_ordering(956) 00:18:31.166 fused_ordering(957) 00:18:31.166 fused_ordering(958) 00:18:31.166 fused_ordering(959) 00:18:31.166 fused_ordering(960) 00:18:31.166 fused_ordering(961) 00:18:31.166 fused_ordering(962) 00:18:31.166 fused_ordering(963) 00:18:31.166 fused_ordering(964) 00:18:31.166 fused_ordering(965) 00:18:31.166 fused_ordering(966) 00:18:31.166 fused_ordering(967) 00:18:31.166 fused_ordering(968) 00:18:31.166 fused_ordering(969) 00:18:31.166 fused_ordering(970) 00:18:31.166 fused_ordering(971) 00:18:31.166 fused_ordering(972) 00:18:31.166 fused_ordering(973) 00:18:31.166 fused_ordering(974) 00:18:31.166 fused_ordering(975) 00:18:31.166 fused_ordering(976) 00:18:31.166 fused_ordering(977) 00:18:31.166 fused_ordering(978) 00:18:31.166 fused_ordering(979) 00:18:31.166 fused_ordering(980) 00:18:31.166 fused_ordering(981) 00:18:31.166 fused_ordering(982) 00:18:31.166 fused_ordering(983) 00:18:31.166 fused_ordering(984) 00:18:31.166 fused_ordering(985) 00:18:31.166 fused_ordering(986) 00:18:31.166 fused_ordering(987) 00:18:31.166 fused_ordering(988) 00:18:31.166 fused_ordering(989) 00:18:31.166 fused_ordering(990) 00:18:31.166 fused_ordering(991) 00:18:31.166 fused_ordering(992) 00:18:31.166 fused_ordering(993) 00:18:31.166 fused_ordering(994) 00:18:31.166 fused_ordering(995) 00:18:31.166 fused_ordering(996) 00:18:31.166 fused_ordering(997) 
00:18:31.166 fused_ordering(998) 00:18:31.166 fused_ordering(999) 00:18:31.166 fused_ordering(1000) 00:18:31.166 fused_ordering(1001) 00:18:31.166 fused_ordering(1002) 00:18:31.166 fused_ordering(1003) 00:18:31.166 fused_ordering(1004) 00:18:31.166 fused_ordering(1005) 00:18:31.166 fused_ordering(1006) 00:18:31.166 fused_ordering(1007) 00:18:31.166 fused_ordering(1008) 00:18:31.166 fused_ordering(1009) 00:18:31.166 fused_ordering(1010) 00:18:31.166 fused_ordering(1011) 00:18:31.166 fused_ordering(1012) 00:18:31.166 fused_ordering(1013) 00:18:31.166 fused_ordering(1014) 00:18:31.166 fused_ordering(1015) 00:18:31.166 fused_ordering(1016) 00:18:31.166 fused_ordering(1017) 00:18:31.166 fused_ordering(1018) 00:18:31.166 fused_ordering(1019) 00:18:31.166 fused_ordering(1020) 00:18:31.166 fused_ordering(1021) 00:18:31.166 fused_ordering(1022) 00:18:31.166 fused_ordering(1023) 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:31.427 rmmod nvme_tcp 00:18:31.427 rmmod nvme_fabrics 00:18:31.427 rmmod nvme_keyring 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2478397 ']' 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2478397 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2478397 ']' 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2478397 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478397 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478397' 00:18:31.427 killing process with pid 2478397 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2478397 00:18:31.427 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2478397 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.367 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.277 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.277 00:18:34.277 real 0m14.014s 00:18:34.277 user 0m7.935s 00:18:34.277 sys 0m6.993s 00:18:34.277 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.277 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.277 ************************************ 00:18:34.277 END TEST nvmf_fused_ordering 00:18:34.277 ************************************ 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.537 11:30:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.537 ************************************ 00:18:34.537 START TEST nvmf_ns_masking 00:18:34.537 ************************************ 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.537 * Looking for test storage... 00:18:34.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.537 11:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.537 --rc genhtml_branch_coverage=1 00:18:34.537 --rc genhtml_function_coverage=1 00:18:34.537 --rc genhtml_legend=1 00:18:34.537 --rc geninfo_all_blocks=1 00:18:34.537 --rc geninfo_unexecuted_blocks=1 00:18:34.537 00:18:34.537 ' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.537 --rc genhtml_branch_coverage=1 00:18:34.537 --rc genhtml_function_coverage=1 00:18:34.537 --rc genhtml_legend=1 00:18:34.537 --rc geninfo_all_blocks=1 00:18:34.537 --rc geninfo_unexecuted_blocks=1 00:18:34.537 00:18:34.537 ' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.537 --rc genhtml_branch_coverage=1 00:18:34.537 --rc genhtml_function_coverage=1 00:18:34.537 --rc genhtml_legend=1 00:18:34.537 --rc geninfo_all_blocks=1 00:18:34.537 --rc geninfo_unexecuted_blocks=1 00:18:34.537 00:18:34.537 ' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.537 --rc genhtml_branch_coverage=1 00:18:34.537 --rc 
genhtml_function_coverage=1 00:18:34.537 --rc genhtml_legend=1 00:18:34.537 --rc geninfo_all_blocks=1 00:18:34.537 --rc geninfo_unexecuted_blocks=1 00:18:34.537 00:18:34.537 ' 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.537 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.538 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.538 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.538 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.538 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8158bef8-7c56-4358-97df-952c53f08388 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5ce916f4-ade0-4a8c-866f-e2cda7ed7043 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8aafc68a-ef2d-439f-867d-ea9f7ca7f661 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:34.798 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.799 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.940 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.940 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:42.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:42.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:42.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:18:42.941 Found net devices under 0000:31:00.0: cvl_0_0 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:42.941 Found net devices under 0000:31:00.1: cvl_0_1 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.941 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:42.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:18:42.941 00:18:42.941 --- 10.0.0.2 ping statistics --- 00:18:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.941 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:18:42.941 00:18:42.941 --- 10.0.0.1 ping statistics --- 00:18:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.941 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2483483 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2483483 
00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2483483 ']' 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.941 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.941 [2024-12-07 11:30:41.373487] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:42.941 [2024-12-07 11:30:41.373626] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.941 [2024-12-07 11:30:41.523062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.941 [2024-12-07 11:30:41.622541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.941 [2024-12-07 11:30:41.622584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:42.941 [2024-12-07 11:30:41.622595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.941 [2024-12-07 11:30:41.622606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.941 [2024-12-07 11:30:41.622617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.941 [2024-12-07 11:30:41.623850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.941 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.942 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.202 [2024-12-07 11:30:42.311695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.202 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:43.202 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:43.202 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:43.202 Malloc1 00:18:43.463 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:43.463 Malloc2 00:18:43.463 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:43.724 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:43.984 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.984 [2024-12-07 11:30:43.303291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.984 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:43.984 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8aafc68a-ef2d-439f-867d-ea9f7ca7f661 -a 10.0.0.2 -s 4420 -i 4 00:18:44.245 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:44.245 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:44.245 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.245 11:30:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:44.245 11:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:46.156 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:46.415 [ 0]:0x1 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.415 
11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8e8ce460b3440e4a40041fa8d141dca 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8e8ce460b3440e4a40041fa8d141dca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.415 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:46.675 [ 0]:0x1 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8e8ce460b3440e4a40041fa8d141dca 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8e8ce460b3440e4a40041fa8d141dca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:46.675 [ 1]:0x2 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.675 11:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:46.675 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.935 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:46.935 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8aafc68a-ef2d-439f-867d-ea9f7ca7f661 -a 10.0.0.2 -s 4420 -i 4 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.288 11:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:47.288 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:49.850 [ 0]:0x2 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.850 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:49.851 [ 0]:0x1 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8e8ce460b3440e4a40041fa8d141dca 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8e8ce460b3440e4a40041fa8d141dca != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:49.851 [ 1]:0x2 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.851 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.851 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:49.851 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.851 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:50.112 [ 0]:0x2 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:50.112 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.373 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:50.373 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:50.373 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8aafc68a-ef2d-439f-867d-ea9f7ca7f661 -a 10.0.0.2 -s 4420 -i 4 00:18:50.633 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:50.633 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:50.634 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.634 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:50.634 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:50.634 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:52.541 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:52.800 [ 0]:0x1 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:52.800 11:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8e8ce460b3440e4a40041fa8d141dca 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8e8ce460b3440e4a40041fa8d141dca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:52.800 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:52.800 [ 1]:0x2 00:18:52.800 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:52.800 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:52.800 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:52.800 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.800 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.064 
11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.064 [ 0]:0x2 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.064 11:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:53.064 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.326 [2024-12-07 11:30:52.506002] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:53.326 request: 00:18:53.326 { 00:18:53.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.326 "nsid": 2, 00:18:53.326 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.326 "method": "nvmf_ns_remove_host", 00:18:53.326 "req_id": 1 00:18:53.326 } 00:18:53.326 Got JSON-RPC error response 00:18:53.326 response: 00:18:53.326 { 00:18:53.326 "code": -32602, 00:18:53.326 "message": "Invalid parameters" 00:18:53.326 } 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:53.326 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.327 11:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.327 [ 0]:0x2 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59f0570994f54fe0b9cec7ad17ba84d5 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59f0570994f54fe0b9cec7ad17ba84d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:53.327 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2485981 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2485981 /var/tmp/host.sock 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2485981 ']' 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:53.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.586 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:53.586 [2024-12-07 11:30:52.792077] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:53.586 [2024-12-07 11:30:52.792184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485981 ] 00:18:53.586 [2024-12-07 11:30:52.932384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.845 [2024-12-07 11:30:53.028917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.428 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.428 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:54.428 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:54.688 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:54.688 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8158bef8-7c56-4358-97df-952c53f08388 00:18:54.688 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:54.688 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8158BEF87C56435897DF952C53F08388 -i 00:18:54.948 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5ce916f4-ade0-4a8c-866f-e2cda7ed7043 00:18:54.948 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:54.948 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5CE916F4ADE04A8C866FE2CDA7ED7043 -i 00:18:55.208 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:55.208 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:55.468 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:55.468 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:55.728 nvme0n1 00:18:55.728 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:55.728 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:55.991 nvme1n2 00:18:55.991 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:55.991 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:55.991 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:55.991 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:55.991 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8158bef8-7c56-4358-97df-952c53f08388 == \8\1\5\8\b\e\f\8\-\7\c\5\6\-\4\3\5\8\-\9\7\d\f\-\9\5\2\c\5\3\f\0\8\3\8\8 ]] 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:56.251 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:56.511 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5ce916f4-ade0-4a8c-866f-e2cda7ed7043 == \5\c\e\9\1\6\f\4\-\a\d\e\0\-\4\a\8\c\-\8\6\6\f\-\e\2\c\d\a\7\e\d\7\0\4\3 ]] 00:18:56.511 11:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:56.770 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:56.770 [2024-12-07 11:30:56.013260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:2 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.770 [2024-12-07 11:30:56.013306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:56.771 [2024-12-07 11:30:56.013327] nvme_ns.c: 287:nvme_ctrlr_identify_id_desc: *WARNING*: Failed to retrieve NS ID Descriptor List 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8158bef8-7c56-4358-97df-952c53f08388 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8158BEF87C56435897DF952C53F08388 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8158BEF87C56435897DF952C53F08388 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:56.771 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8158BEF87C56435897DF952C53F08388 00:18:57.031 [2024-12-07 11:30:56.213242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:57.031 [2024-12-07 11:30:56.213286] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:57.031 [2024-12-07 11:30:56.213303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.031 request: 00:18:57.031 { 00:18:57.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.031 "namespace": { 00:18:57.031 "bdev_name": "invalid", 
00:18:57.031 "nsid": 1, 00:18:57.031 "nguid": "8158BEF87C56435897DF952C53F08388", 00:18:57.031 "no_auto_visible": false, 00:18:57.031 "hide_metadata": false 00:18:57.031 }, 00:18:57.031 "method": "nvmf_subsystem_add_ns", 00:18:57.031 "req_id": 1 00:18:57.031 } 00:18:57.031 Got JSON-RPC error response 00:18:57.031 response: 00:18:57.031 { 00:18:57.031 "code": -32602, 00:18:57.031 "message": "Invalid parameters" 00:18:57.031 } 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8158bef8-7c56-4358-97df-952c53f08388 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:57.031 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8158BEF87C56435897DF952C53F08388 -i 00:18:57.290 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:59.202 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:59.202 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:59.202 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2485981 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2485981 ']' 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2485981 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485981 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485981' 00:18:59.463 killing process with pid 2485981 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2485981 00:18:59.463 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2485981 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.848 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.848 rmmod nvme_tcp 00:19:00.848 rmmod nvme_fabrics 00:19:00.848 rmmod nvme_keyring 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2483483 ']' 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2483483 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2483483 ']' 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2483483 00:19:00.848 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483483 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483483' 00:19:00.849 killing process with pid 2483483 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2483483 00:19:00.849 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2483483 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.790 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:19:04.333 00:19:04.333 real 0m29.516s 00:19:04.333 user 0m33.790s 00:19:04.333 sys 0m8.187s 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:04.333 ************************************ 00:19:04.333 END TEST nvmf_ns_masking 00:19:04.333 ************************************ 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:04.333 ************************************ 00:19:04.333 START TEST nvmf_nvme_cli 00:19:04.333 ************************************ 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:04.333 * Looking for test storage... 
00:19:04.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:04.333 11:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.333 --rc 
genhtml_branch_coverage=1 00:19:04.333 --rc genhtml_function_coverage=1 00:19:04.333 --rc genhtml_legend=1 00:19:04.333 --rc geninfo_all_blocks=1 00:19:04.333 --rc geninfo_unexecuted_blocks=1 00:19:04.333 00:19:04.333 ' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.333 --rc genhtml_branch_coverage=1 00:19:04.333 --rc genhtml_function_coverage=1 00:19:04.333 --rc genhtml_legend=1 00:19:04.333 --rc geninfo_all_blocks=1 00:19:04.333 --rc geninfo_unexecuted_blocks=1 00:19:04.333 00:19:04.333 ' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.333 --rc genhtml_branch_coverage=1 00:19:04.333 --rc genhtml_function_coverage=1 00:19:04.333 --rc genhtml_legend=1 00:19:04.333 --rc geninfo_all_blocks=1 00:19:04.333 --rc geninfo_unexecuted_blocks=1 00:19:04.333 00:19:04.333 ' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.333 --rc genhtml_branch_coverage=1 00:19:04.333 --rc genhtml_function_coverage=1 00:19:04.333 --rc genhtml_legend=1 00:19:04.333 --rc geninfo_all_blocks=1 00:19:04.333 --rc geninfo_unexecuted_blocks=1 00:19:04.333 00:19:04.333 ' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.333 11:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.333 11:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.333 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.334 11:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:04.334 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.470 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:12.471 11:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:12.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:12.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.471 11:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:12.471 Found net devices under 0000:31:00.0: cvl_0_0 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:12.471 Found net devices under 0000:31:00.1: cvl_0_1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.471 11:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:19:12.471 00:19:12.471 --- 10.0.0.2 ping statistics --- 00:19:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.471 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:19:12.471 00:19:12.471 --- 10.0.0.1 ping statistics --- 00:19:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.471 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:12.471 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:12.472 11:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2491770 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2491770 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2491770 ']' 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.472 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.472 [2024-12-07 11:31:11.013581] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:12.472 [2024-12-07 11:31:11.013709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.472 [2024-12-07 11:31:11.165302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.472 [2024-12-07 11:31:11.270920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.472 [2024-12-07 11:31:11.270961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.472 [2024-12-07 11:31:11.270974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.472 [2024-12-07 11:31:11.270985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.472 [2024-12-07 11:31:11.270994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.472 [2024-12-07 11:31:11.273166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.472 [2024-12-07 11:31:11.273249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.472 [2024-12-07 11:31:11.273364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.472 [2024-12-07 11:31:11.273387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.472 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 [2024-12-07 11:31:11.824235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 Malloc0 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 Malloc1 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 [2024-12-07 11:31:12.002546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.732 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:12.993 00:19:12.993 Discovery Log Number of Records 2, Generation counter 2 00:19:12.993 =====Discovery Log Entry 0====== 00:19:12.993 trtype: tcp 00:19:12.993 adrfam: ipv4 00:19:12.993 subtype: current discovery subsystem 00:19:12.993 treq: not required 00:19:12.993 portid: 0 00:19:12.993 trsvcid: 4420 
00:19:12.993 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:12.993 traddr: 10.0.0.2 00:19:12.993 eflags: explicit discovery connections, duplicate discovery information 00:19:12.993 sectype: none 00:19:12.993 =====Discovery Log Entry 1====== 00:19:12.993 trtype: tcp 00:19:12.993 adrfam: ipv4 00:19:12.993 subtype: nvme subsystem 00:19:12.993 treq: not required 00:19:12.993 portid: 0 00:19:12.993 trsvcid: 4420 00:19:12.993 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:12.993 traddr: 10.0.0.2 00:19:12.993 eflags: none 00:19:12.993 sectype: none 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:12.993 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:14.903 11:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:14.903 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:14.903 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.903 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:14.903 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:14.903 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:16.812 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:16.812 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:16.812 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:16.813 
11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:16.813 /dev/nvme0n2 ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:16.813 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.813 rmmod nvme_tcp 00:19:16.813 rmmod nvme_fabrics 00:19:16.813 rmmod nvme_keyring 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2491770 ']' 
00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2491770 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2491770 ']' 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2491770 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:16.813 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2491770 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2491770' 00:19:17.074 killing process with pid 2491770 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2491770 00:19:17.074 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2491770 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-save 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.017 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.568 00:19:20.568 real 0m16.030s 00:19:20.568 user 0m25.226s 00:19:20.568 sys 0m6.397s 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.568 ************************************ 00:19:20.568 END TEST nvmf_nvme_cli 00:19:20.568 ************************************ 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.568 ************************************ 00:19:20.568 START 
TEST nvmf_auth_target 00:19:20.568 ************************************ 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:20.568 * Looking for test storage... 00:19:20.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.568 
11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:20.568 
11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:20.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.568 --rc genhtml_branch_coverage=1 00:19:20.568 --rc genhtml_function_coverage=1 00:19:20.568 --rc genhtml_legend=1 00:19:20.568 --rc geninfo_all_blocks=1 00:19:20.568 --rc geninfo_unexecuted_blocks=1 00:19:20.568 00:19:20.568 ' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:20.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.568 --rc genhtml_branch_coverage=1 00:19:20.568 --rc genhtml_function_coverage=1 00:19:20.568 --rc genhtml_legend=1 00:19:20.568 --rc geninfo_all_blocks=1 00:19:20.568 --rc geninfo_unexecuted_blocks=1 00:19:20.568 00:19:20.568 ' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:20.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.568 --rc genhtml_branch_coverage=1 00:19:20.568 --rc genhtml_function_coverage=1 00:19:20.568 --rc genhtml_legend=1 00:19:20.568 --rc geninfo_all_blocks=1 00:19:20.568 --rc geninfo_unexecuted_blocks=1 00:19:20.568 00:19:20.568 ' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:20.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.568 --rc genhtml_branch_coverage=1 00:19:20.568 --rc genhtml_function_coverage=1 00:19:20.568 --rc genhtml_legend=1 00:19:20.568 --rc geninfo_all_blocks=1 00:19:20.568 --rc geninfo_unexecuted_blocks=1 00:19:20.568 00:19:20.568 ' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.568 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:20.569 11:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.569 11:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.569 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:28.715 11:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:28.715 11:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:28.715 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:28.715 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.715 
11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:28.715 Found net devices under 0000:31:00.0: cvl_0_0 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:28.715 
11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:28.715 Found net devices under 0000:31:00.1: cvl_0_1 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.715 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:28.716 11:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:28.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:19:28.716 00:19:28.716 --- 10.0.0.2 ping statistics --- 00:19:28.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.716 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:19:28.716 00:19:28.716 --- 10.0.0.1 ping statistics --- 00:19:28.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.716 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2497222 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2497222 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2497222 ']' 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
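The records above show the target being launched under `ip netns exec` and then `waitforlisten` blocking until the RPC socket `/var/tmp/spdk.sock` comes up. A minimal sketch of that start-then-poll pattern is below; `waitforsocket` and its parameters are illustrative names, not SPDK's actual helpers (SPDK's `waitforlisten` additionally probes the RPC endpoint, not just the socket file):

```shell
# Illustrative sketch: poll until a UNIX-domain socket appears, with a
# bounded number of retries, mirroring the waitforlisten step in the log.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S is true once the path exists and is a socket
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Typical use would be `some_daemon -r /var/tmp/spdk.sock & waitforsocket /var/tmp/spdk.sock`, failing the test early if the daemon never creates its socket instead of hanging indefinitely.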
00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.716 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2497573 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2de1e94f51236fc4671093221076b4cc813de13c17b3d1a5 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1ob 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2de1e94f51236fc4671093221076b4cc813de13c17b3d1a5 0 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2de1e94f51236fc4671093221076b4cc813de13c17b3d1a5 0 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2de1e94f51236fc4671093221076b4cc813de13c17b3d1a5 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1ob 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1ob 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1ob 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1ac6eb29b387e4fbd1fb9752298c4af5fd7c6b188bf0855dd03d11b10083107a 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5N7 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1ac6eb29b387e4fbd1fb9752298c4af5fd7c6b188bf0855dd03d11b10083107a 3 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1ac6eb29b387e4fbd1fb9752298c4af5fd7c6b188bf0855dd03d11b10083107a 3 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1ac6eb29b387e4fbd1fb9752298c4af5fd7c6b188bf0855dd03d11b10083107a 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5N7 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5N7 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.5N7 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.716 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8633d3e840e566ce49119e64a6e03585 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YlX 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8633d3e840e566ce49119e64a6e03585 1 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8633d3e840e566ce49119e64a6e03585 1 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8633d3e840e566ce49119e64a6e03585 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:28.717 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YlX 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YlX 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YlX 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7da76e1c1b80d68774c24884b10394b08a1f38b9b5a0ec0 00:19:28.717 11:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.z4m 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7da76e1c1b80d68774c24884b10394b08a1f38b9b5a0ec0 2 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b7da76e1c1b80d68774c24884b10394b08a1f38b9b5a0ec0 2 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7da76e1c1b80d68774c24884b10394b08a1f38b9b5a0ec0 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:28.717 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.z4m 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.z4m 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.z4m 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3dcec42698b40d96fa573d8f3163442dc4df9452cb9f2c88 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.630 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3dcec42698b40d96fa573d8f3163442dc4df9452cb9f2c88 2 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3dcec42698b40d96fa573d8f3163442dc4df9452cb9f2c88 2 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3dcec42698b40d96fa573d8f3163442dc4df9452cb9f2c88 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.630 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.630 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.630 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9f362523900c19d58b9a98ad396d029 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cRt 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9f362523900c19d58b9a98ad396d029 1 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9f362523900c19d58b9a98ad396d029 1 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9f362523900c19d58b9a98ad396d029 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
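The trace above repeatedly runs `gen_dhchap_key <digest> <len>`: it pulls `len/2` random bytes via `xxd -p -c0 /dev/urandom`, then `format_dhchap_key` hands the hex string to an inline `python -` that emits a `DHHC-1:` secret. A minimal Python sketch of that formatting, assuming the DHHC-1 payload is base64(key bytes || CRC-32 of the key, 4 bytes little-endian) as in nvme-cli's `gen-dhchap-key`; the function names mirror the shell helpers but are illustrative, not SPDK's actual code:

```python
import base64
import secrets
import struct
import zlib

# Digest ids seen in the trace's digests map: 0=null, 1=sha256, 2=sha384, 3=sha512.
def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Format a raw hex key as a DHHC-1 secret string (sketch)."""
    raw = bytes.fromhex(hex_key)
    # Payload is the key bytes followed by CRC-32 of the key, little-endian.
    payload = raw + struct.pack("<I", zlib.crc32(raw))
    return "DHHC-1:{:02d}:{}:".format(digest, base64.b64encode(payload).decode())

def gen_dhchap_key(digest: int, hex_len: int) -> str:
    """Mirror of the `xxd -p -c0 -l <hex_len/2> /dev/urandom` step in the log."""
    return format_dhchap_key(secrets.token_hex(hex_len // 2), digest)
```

For example, the sha256 key `8633d3e840e566ce49119e64a6e03585` from the trace would format as a `DHHC-1:01:...:` secret whose base64 payload decodes back to those 16 bytes plus a CRC.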
00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cRt 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cRt 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.cRt 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=454cdb338e98d74691ecc8cea311dc0ec0dfadb6abb040bad86f4b7397e38971 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VHi 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 454cdb338e98d74691ecc8cea311dc0ec0dfadb6abb040bad86f4b7397e38971 3 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 454cdb338e98d74691ecc8cea311dc0ec0dfadb6abb040bad86f4b7397e38971 3 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=454cdb338e98d74691ecc8cea311dc0ec0dfadb6abb040bad86f4b7397e38971 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VHi 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VHi 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.VHi 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2497222 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2497222 ']' 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
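The `waitforlisten <pid> [rpc_addr]` call above (from `common/autotest_common.sh`) blocks until the freshly started target is alive and accepting RPCs on its UNIX socket, hence the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message. A rough, hypothetical Python equivalent of that retry loop; the socket path, retry count, and delay are illustrative defaults, not SPDK's:

```python
import os
import socket
import time

def wait_for_listen(pid: int, sock_path: str = "/var/tmp/spdk.sock",
                    max_retries: int = 100, delay: float = 0.1) -> None:
    """Poll until `pid` is alive and something accepts connections on sock_path."""
    for _ in range(max_retries):
        os.kill(pid, 0)  # signal 0: raises if the process died while we wait
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return  # socket is up; RPCs can be issued
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # not listening yet; back off and retry
    raise TimeoutError(f"{sock_path} never came up")
```

The trace runs this twice: once for the target's `/var/tmp/spdk.sock` and once for the host-side server on `/var/tmp/host.sock`.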
00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.979 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2497573 /var/tmp/host.sock 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2497573 ']' 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:29.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
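Each `connect_authenticate` iteration below dumps `nvmf_subsystem_get_qpairs` and uses `jq -r '.[0].auth.digest'` (and `.dhgroup`, `.state`) to assert the qpair finished DH-HMAC-CHAP with the expected parameters. The same check as a small Python sketch, with field names taken from the qpair JSON shown in the trace (`check_qpair_auth` itself is a hypothetical helper):

```python
import json

def check_qpair_auth(qpairs_json: str, digest: str, dhgroup: str) -> None:
    """Assert the first qpair completed authentication with the expected params."""
    auth = json.loads(qpairs_json)[0]["auth"]
    assert auth["state"] == "completed", auth   # handshake finished successfully
    assert auth["digest"] == digest, auth       # e.g. "sha256"
    assert auth["dhgroup"] == dhgroup, auth     # e.g. "null"

# Example shaped like the sha256/null qpair record later in the log:
sample = '[{"cntlid": 1, "qid": 0, "state": "enabled", ' \
         '"auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
check_qpair_auth(sample, "sha256", "null")
```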
00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.241 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1ob 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1ob 00:19:29.502 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1ob 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.5N7 ]] 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5N7 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5N7 00:19:29.763 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5N7 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YlX 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YlX 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YlX 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.z4m ]] 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4m 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4m 00:19:30.024 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4m 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.630 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.630 00:19:30.285 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.630 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.cRt ]] 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cRt 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cRt 00:19:30.546 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cRt 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VHi 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VHi 00:19:30.806 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VHi 00:19:30.806 11:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:30.806 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:30.806 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.806 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.806 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.806 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.066 11:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.066 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.326 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.326 { 00:19:31.326 "cntlid": 1, 00:19:31.326 "qid": 0, 00:19:31.326 "state": "enabled", 00:19:31.326 "thread": "nvmf_tgt_poll_group_000", 00:19:31.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:31.326 "listen_address": { 00:19:31.326 "trtype": "TCP", 00:19:31.326 "adrfam": "IPv4", 00:19:31.326 "traddr": "10.0.0.2", 00:19:31.326 "trsvcid": "4420" 00:19:31.326 }, 00:19:31.326 "peer_address": { 00:19:31.326 "trtype": "TCP", 00:19:31.326 "adrfam": "IPv4", 00:19:31.326 "traddr": "10.0.0.1", 00:19:31.326 "trsvcid": "39906" 00:19:31.326 }, 00:19:31.326 "auth": { 00:19:31.326 "state": "completed", 00:19:31.326 "digest": "sha256", 00:19:31.326 "dhgroup": "null" 00:19:31.326 } 00:19:31.326 } 00:19:31.326 ]' 00:19:31.326 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.587 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.847 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:31.847 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:32.417 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.417 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.417 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.417 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.678 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.939 00:19:32.939 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.939 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.940 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.200 { 00:19:33.200 "cntlid": 3, 00:19:33.200 "qid": 0, 00:19:33.200 "state": "enabled", 00:19:33.200 "thread": "nvmf_tgt_poll_group_000", 00:19:33.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:33.200 "listen_address": { 00:19:33.200 "trtype": "TCP", 00:19:33.200 "adrfam": "IPv4", 00:19:33.200 
"traddr": "10.0.0.2", 00:19:33.200 "trsvcid": "4420" 00:19:33.200 }, 00:19:33.200 "peer_address": { 00:19:33.200 "trtype": "TCP", 00:19:33.200 "adrfam": "IPv4", 00:19:33.200 "traddr": "10.0.0.1", 00:19:33.200 "trsvcid": "46952" 00:19:33.200 }, 00:19:33.200 "auth": { 00:19:33.200 "state": "completed", 00:19:33.200 "digest": "sha256", 00:19:33.200 "dhgroup": "null" 00:19:33.200 } 00:19:33.200 } 00:19:33.200 ]' 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.200 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.461 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:33.461 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:34.401 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.402 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.662 00:19:34.662 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.662 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.662 
11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.922 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.922 { 00:19:34.922 "cntlid": 5, 00:19:34.922 "qid": 0, 00:19:34.922 "state": "enabled", 00:19:34.922 "thread": "nvmf_tgt_poll_group_000", 00:19:34.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:34.923 "listen_address": { 00:19:34.923 "trtype": "TCP", 00:19:34.923 "adrfam": "IPv4", 00:19:34.923 "traddr": "10.0.0.2", 00:19:34.923 "trsvcid": "4420" 00:19:34.923 }, 00:19:34.923 "peer_address": { 00:19:34.923 "trtype": "TCP", 00:19:34.923 "adrfam": "IPv4", 00:19:34.923 "traddr": "10.0.0.1", 00:19:34.923 "trsvcid": "46970" 00:19:34.923 }, 00:19:34.923 "auth": { 00:19:34.923 "state": "completed", 00:19:34.923 "digest": "sha256", 00:19:34.923 "dhgroup": "null" 00:19:34.923 } 00:19:34.923 } 00:19:34.923 ]' 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.923 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.183 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:35.183 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.381 00:19:36.381 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.381 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.381 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.641 
11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.641 { 00:19:36.641 "cntlid": 7, 00:19:36.641 "qid": 0, 00:19:36.641 "state": "enabled", 00:19:36.641 "thread": "nvmf_tgt_poll_group_000", 00:19:36.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:36.641 "listen_address": { 00:19:36.641 "trtype": "TCP", 00:19:36.641 "adrfam": "IPv4", 00:19:36.641 "traddr": "10.0.0.2", 00:19:36.641 "trsvcid": "4420" 00:19:36.641 }, 00:19:36.641 "peer_address": { 00:19:36.641 "trtype": "TCP", 00:19:36.641 "adrfam": "IPv4", 00:19:36.641 "traddr": "10.0.0.1", 00:19:36.641 "trsvcid": "46992" 00:19:36.641 }, 00:19:36.641 "auth": { 00:19:36.641 "state": "completed", 00:19:36.641 "digest": "sha256", 00:19:36.641 "dhgroup": "null" 00:19:36.641 } 00:19:36.641 } 00:19:36.641 ]' 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.641 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.901 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:36.901 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.841 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.841 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.101 00:19:38.101 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.101 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.101 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.362 { 00:19:38.362 "cntlid": 9, 00:19:38.362 "qid": 0, 00:19:38.362 "state": "enabled", 00:19:38.362 "thread": "nvmf_tgt_poll_group_000", 00:19:38.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:38.362 "listen_address": { 00:19:38.362 "trtype": "TCP", 00:19:38.362 "adrfam": "IPv4", 00:19:38.362 "traddr": "10.0.0.2", 00:19:38.362 "trsvcid": "4420" 00:19:38.362 }, 00:19:38.362 "peer_address": { 00:19:38.362 "trtype": "TCP", 00:19:38.362 "adrfam": "IPv4", 00:19:38.362 "traddr": "10.0.0.1", 00:19:38.362 "trsvcid": "47028" 00:19:38.362 
}, 00:19:38.362 "auth": { 00:19:38.362 "state": "completed", 00:19:38.362 "digest": "sha256", 00:19:38.362 "dhgroup": "ffdhe2048" 00:19:38.362 } 00:19:38.362 } 00:19:38.362 ]' 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.362 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.623 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:38.623 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.564 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.826 00:19:39.826 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.826 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.826 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.088 { 00:19:40.088 "cntlid": 11, 00:19:40.088 "qid": 0, 00:19:40.088 "state": "enabled", 00:19:40.088 "thread": "nvmf_tgt_poll_group_000", 00:19:40.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:40.088 "listen_address": { 00:19:40.088 "trtype": "TCP", 00:19:40.088 "adrfam": "IPv4", 00:19:40.088 "traddr": "10.0.0.2", 00:19:40.088 "trsvcid": "4420" 00:19:40.088 }, 00:19:40.088 "peer_address": { 00:19:40.088 "trtype": "TCP", 00:19:40.088 "adrfam": "IPv4", 00:19:40.088 "traddr": "10.0.0.1", 00:19:40.088 "trsvcid": "47050" 00:19:40.088 }, 00:19:40.088 "auth": { 00:19:40.088 "state": "completed", 00:19:40.088 "digest": "sha256", 00:19:40.088 "dhgroup": "ffdhe2048" 00:19:40.088 } 00:19:40.088 } 00:19:40.088 ]' 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.088 11:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.088 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.349 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:40.349 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.921 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.183 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.503 00:19:41.503 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.503 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.503 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.818 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.818 { 00:19:41.818 "cntlid": 13, 00:19:41.818 "qid": 0, 00:19:41.818 "state": "enabled", 00:19:41.818 "thread": "nvmf_tgt_poll_group_000", 00:19:41.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:41.818 "listen_address": { 00:19:41.818 "trtype": "TCP", 00:19:41.818 "adrfam": "IPv4", 00:19:41.818 "traddr": "10.0.0.2", 00:19:41.818 "trsvcid": "4420" 00:19:41.818 }, 00:19:41.818 "peer_address": { 00:19:41.818 "trtype": "TCP", 00:19:41.818 "adrfam": "IPv4", 00:19:41.818 "traddr": "10.0.0.1", 00:19:41.818 "trsvcid": "47086" 00:19:41.818 }, 00:19:41.818 "auth": { 00:19:41.818 "state": "completed", 00:19:41.818 "digest": "sha256", 00:19:41.818 "dhgroup": "ffdhe2048" 00:19:41.818 } 00:19:41.818 } 00:19:41.818 ]' 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.818 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.818 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.818 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.818 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.093 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:42.093 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.666 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.928 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:42.928 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.928 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.929 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.189 00:19:43.189 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.189 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.189 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.189 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.190 { 00:19:43.190 "cntlid": 15, 00:19:43.190 "qid": 0, 00:19:43.190 "state": "enabled", 00:19:43.190 "thread": "nvmf_tgt_poll_group_000", 00:19:43.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:43.190 "listen_address": { 00:19:43.190 "trtype": "TCP", 00:19:43.190 "adrfam": "IPv4", 00:19:43.190 "traddr": "10.0.0.2", 00:19:43.190 "trsvcid": "4420" 00:19:43.190 }, 00:19:43.190 "peer_address": { 00:19:43.190 "trtype": "TCP", 00:19:43.190 "adrfam": "IPv4", 00:19:43.190 "traddr": "10.0.0.1", 
00:19:43.190 "trsvcid": "43892" 00:19:43.190 }, 00:19:43.190 "auth": { 00:19:43.190 "state": "completed", 00:19:43.190 "digest": "sha256", 00:19:43.190 "dhgroup": "ffdhe2048" 00:19:43.190 } 00:19:43.190 } 00:19:43.190 ]' 00:19:43.190 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.450 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.711 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:43.711 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.283 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.544 11:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.544 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.812 00:19:44.812 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.812 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.812 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.812 { 00:19:44.812 "cntlid": 17, 00:19:44.812 "qid": 0, 00:19:44.812 "state": "enabled", 00:19:44.812 "thread": "nvmf_tgt_poll_group_000", 00:19:44.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:44.812 "listen_address": { 00:19:44.812 "trtype": "TCP", 00:19:44.812 "adrfam": "IPv4", 00:19:44.812 "traddr": "10.0.0.2", 00:19:44.812 "trsvcid": "4420" 00:19:44.812 }, 00:19:44.812 "peer_address": { 00:19:44.812 "trtype": "TCP", 00:19:44.812 "adrfam": "IPv4", 00:19:44.812 "traddr": "10.0.0.1", 00:19:44.812 "trsvcid": "43930" 00:19:44.812 }, 00:19:44.812 "auth": { 00:19:44.812 "state": "completed", 00:19:44.812 "digest": "sha256", 00:19:44.812 "dhgroup": "ffdhe3072" 00:19:44.812 } 00:19:44.812 } 00:19:44.812 ]' 00:19:44.812 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.074 11:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.074 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.335 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:45.335 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.905 11:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.905 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.164 11:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.164 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.424 00:19:46.424 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.424 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.424 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.684 { 00:19:46.684 "cntlid": 19, 00:19:46.684 "qid": 0, 00:19:46.684 "state": "enabled", 00:19:46.684 "thread": "nvmf_tgt_poll_group_000", 00:19:46.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:46.684 "listen_address": { 00:19:46.684 "trtype": "TCP", 00:19:46.684 "adrfam": "IPv4", 00:19:46.684 "traddr": "10.0.0.2", 00:19:46.684 "trsvcid": "4420" 00:19:46.684 }, 00:19:46.684 "peer_address": { 00:19:46.684 "trtype": "TCP", 00:19:46.684 "adrfam": "IPv4", 00:19:46.684 "traddr": "10.0.0.1", 00:19:46.684 "trsvcid": "43954" 00:19:46.684 }, 00:19:46.684 "auth": { 00:19:46.684 "state": "completed", 00:19:46.684 "digest": "sha256", 00:19:46.684 "dhgroup": "ffdhe3072" 00:19:46.684 } 00:19:46.684 } 00:19:46.684 ]' 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.684 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.945 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:46.945 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.886 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.886 11:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.886 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.147 00:19:48.147 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.147 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.147 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.407 { 00:19:48.407 "cntlid": 21, 00:19:48.407 "qid": 0, 00:19:48.407 "state": "enabled", 00:19:48.407 "thread": "nvmf_tgt_poll_group_000", 00:19:48.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:48.407 "listen_address": { 00:19:48.407 "trtype": "TCP", 00:19:48.407 "adrfam": "IPv4", 00:19:48.407 "traddr": "10.0.0.2", 00:19:48.407 
"trsvcid": "4420" 00:19:48.407 }, 00:19:48.407 "peer_address": { 00:19:48.407 "trtype": "TCP", 00:19:48.407 "adrfam": "IPv4", 00:19:48.407 "traddr": "10.0.0.1", 00:19:48.407 "trsvcid": "43960" 00:19:48.407 }, 00:19:48.407 "auth": { 00:19:48.407 "state": "completed", 00:19:48.407 "digest": "sha256", 00:19:48.407 "dhgroup": "ffdhe3072" 00:19:48.407 } 00:19:48.407 } 00:19:48.407 ]' 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.407 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.408 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.668 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:48.668 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:49.238 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.238 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.238 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.238 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.498 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.499 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.759 00:19:49.759 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.759 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.759 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.021 { 00:19:50.021 "cntlid": 23, 00:19:50.021 "qid": 0, 00:19:50.021 "state": "enabled", 00:19:50.021 "thread": "nvmf_tgt_poll_group_000", 00:19:50.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:50.021 "listen_address": { 00:19:50.021 "trtype": "TCP", 00:19:50.021 "adrfam": "IPv4", 00:19:50.021 "traddr": "10.0.0.2", 00:19:50.021 "trsvcid": "4420" 00:19:50.021 }, 00:19:50.021 "peer_address": { 00:19:50.021 "trtype": "TCP", 00:19:50.021 "adrfam": "IPv4", 00:19:50.021 "traddr": "10.0.0.1", 00:19:50.021 "trsvcid": "43986" 00:19:50.021 }, 00:19:50.021 "auth": { 00:19:50.021 "state": "completed", 00:19:50.021 "digest": "sha256", 00:19:50.021 "dhgroup": "ffdhe3072" 00:19:50.021 } 00:19:50.021 } 00:19:50.021 ]' 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.021 11:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.021 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.283 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.283 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.283 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.283 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:50.283 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:51.226 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.226 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.226 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.227 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.487 00:19:51.487 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.487 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.487 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.747 11:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.747 { 00:19:51.747 "cntlid": 25, 00:19:51.747 "qid": 0, 00:19:51.747 "state": "enabled", 00:19:51.747 "thread": "nvmf_tgt_poll_group_000", 00:19:51.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:51.747 "listen_address": { 00:19:51.747 "trtype": "TCP", 00:19:51.747 "adrfam": "IPv4", 00:19:51.747 "traddr": "10.0.0.2", 00:19:51.747 "trsvcid": "4420" 00:19:51.747 }, 00:19:51.747 "peer_address": { 00:19:51.747 "trtype": "TCP", 00:19:51.747 "adrfam": "IPv4", 00:19:51.747 "traddr": "10.0.0.1", 00:19:51.747 "trsvcid": "53692" 00:19:51.747 }, 00:19:51.747 "auth": { 00:19:51.747 "state": "completed", 00:19:51.747 "digest": "sha256", 00:19:51.747 "dhgroup": "ffdhe4096" 00:19:51.747 } 00:19:51.747 } 00:19:51.747 ]' 00:19:51.747 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.747 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.747 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.747 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.747 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.008 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.008 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.008 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.008 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:52.008 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.947 11:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.947 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.206 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.465 { 00:19:53.465 "cntlid": 27, 00:19:53.465 "qid": 0, 00:19:53.465 "state": "enabled", 00:19:53.465 "thread": "nvmf_tgt_poll_group_000", 00:19:53.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:53.465 "listen_address": { 00:19:53.465 "trtype": "TCP", 00:19:53.465 "adrfam": "IPv4", 00:19:53.465 "traddr": "10.0.0.2", 00:19:53.465 
"trsvcid": "4420" 00:19:53.465 }, 00:19:53.465 "peer_address": { 00:19:53.465 "trtype": "TCP", 00:19:53.465 "adrfam": "IPv4", 00:19:53.465 "traddr": "10.0.0.1", 00:19:53.465 "trsvcid": "53712" 00:19:53.465 }, 00:19:53.465 "auth": { 00:19:53.465 "state": "completed", 00:19:53.465 "digest": "sha256", 00:19:53.465 "dhgroup": "ffdhe4096" 00:19:53.465 } 00:19:53.465 } 00:19:53.465 ]' 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.465 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.726 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.726 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.726 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.726 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.726 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.726 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:53.726 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.686 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.687 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.687 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.687 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.946 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.206 { 00:19:55.206 "cntlid": 29, 00:19:55.206 "qid": 0, 00:19:55.206 "state": "enabled", 00:19:55.206 "thread": "nvmf_tgt_poll_group_000", 00:19:55.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:55.206 "listen_address": { 00:19:55.206 "trtype": "TCP", 00:19:55.206 "adrfam": "IPv4", 00:19:55.206 "traddr": "10.0.0.2", 00:19:55.206 "trsvcid": "4420" 00:19:55.206 }, 00:19:55.206 "peer_address": { 00:19:55.206 "trtype": "TCP", 00:19:55.206 "adrfam": "IPv4", 00:19:55.206 "traddr": "10.0.0.1", 00:19:55.206 "trsvcid": "53736" 00:19:55.206 }, 00:19:55.206 "auth": { 00:19:55.206 "state": "completed", 00:19:55.206 "digest": "sha256", 00:19:55.206 "dhgroup": "ffdhe4096" 00:19:55.206 } 00:19:55.206 } 00:19:55.206 ]' 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.206 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.206 11:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:55.467 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.405 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.665 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.925 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.925 { 00:19:56.925 "cntlid": 31, 00:19:56.925 "qid": 0, 00:19:56.925 "state": "enabled", 00:19:56.925 "thread": "nvmf_tgt_poll_group_000", 00:19:56.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:56.925 "listen_address": { 00:19:56.925 "trtype": "TCP", 00:19:56.925 "adrfam": "IPv4", 00:19:56.925 "traddr": "10.0.0.2", 00:19:56.925 "trsvcid": "4420" 00:19:56.925 }, 00:19:56.925 "peer_address": { 00:19:56.925 "trtype": "TCP", 00:19:56.925 "adrfam": "IPv4", 00:19:56.925 "traddr": "10.0.0.1", 00:19:56.925 "trsvcid": "53776" 00:19:56.925 }, 00:19:56.925 "auth": { 00:19:56.925 "state": "completed", 00:19:56.925 "digest": "sha256", 00:19:56.925 "dhgroup": "ffdhe4096" 00:19:56.925 } 00:19:56.925 } 00:19:56.925 ]' 00:19:56.925 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.185 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.444 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:57.444 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.013 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.013 11:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.273 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.274 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.274 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.274 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.533 00:19:58.792 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.792 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.792 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.792 { 00:19:58.792 "cntlid": 33, 00:19:58.792 "qid": 0, 00:19:58.792 "state": "enabled", 00:19:58.792 "thread": "nvmf_tgt_poll_group_000", 00:19:58.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:58.792 "listen_address": { 00:19:58.792 "trtype": "TCP", 00:19:58.792 "adrfam": "IPv4", 00:19:58.792 "traddr": "10.0.0.2", 00:19:58.792 
"trsvcid": "4420" 00:19:58.792 }, 00:19:58.792 "peer_address": { 00:19:58.792 "trtype": "TCP", 00:19:58.792 "adrfam": "IPv4", 00:19:58.792 "traddr": "10.0.0.1", 00:19:58.792 "trsvcid": "53798" 00:19:58.792 }, 00:19:58.792 "auth": { 00:19:58.792 "state": "completed", 00:19:58.792 "digest": "sha256", 00:19:58.792 "dhgroup": "ffdhe6144" 00:19:58.792 } 00:19:58.792 } 00:19:58.792 ]' 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.792 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.053 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.053 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.053 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.053 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.053 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.313 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:59.314 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:19:59.884 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.884 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.885 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.145 11:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.145 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.146 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.146 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.146 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.406 00:20:00.406 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.406 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.406 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.667 { 00:20:00.667 "cntlid": 35, 00:20:00.667 "qid": 0, 00:20:00.667 "state": "enabled", 00:20:00.667 "thread": "nvmf_tgt_poll_group_000", 00:20:00.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.667 "listen_address": { 00:20:00.667 "trtype": "TCP", 00:20:00.667 "adrfam": "IPv4", 00:20:00.667 "traddr": "10.0.0.2", 00:20:00.667 "trsvcid": "4420" 00:20:00.667 }, 00:20:00.667 "peer_address": { 00:20:00.667 "trtype": "TCP", 00:20:00.667 "adrfam": "IPv4", 00:20:00.667 "traddr": "10.0.0.1", 00:20:00.667 "trsvcid": "53836" 00:20:00.667 }, 00:20:00.667 "auth": { 00:20:00.667 "state": "completed", 00:20:00.667 "digest": "sha256", 00:20:00.667 "dhgroup": "ffdhe6144" 00:20:00.667 } 00:20:00.667 } 00:20:00.667 ]' 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.667 11:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.667 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.667 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.667 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.667 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.928 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:00.928 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.871 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.871 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.131 00:20:02.131 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.131 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.131 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.392 11:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.392 { 00:20:02.392 "cntlid": 37, 00:20:02.392 "qid": 0, 00:20:02.392 "state": "enabled", 00:20:02.392 "thread": "nvmf_tgt_poll_group_000", 00:20:02.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.392 "listen_address": { 00:20:02.392 "trtype": "TCP", 00:20:02.392 "adrfam": "IPv4", 00:20:02.392 "traddr": "10.0.0.2", 00:20:02.392 "trsvcid": "4420" 00:20:02.392 }, 00:20:02.392 "peer_address": { 00:20:02.392 "trtype": "TCP", 00:20:02.392 "adrfam": "IPv4", 00:20:02.392 "traddr": "10.0.0.1", 00:20:02.392 "trsvcid": "32892" 00:20:02.392 }, 00:20:02.392 "auth": { 00:20:02.392 "state": "completed", 00:20:02.392 "digest": "sha256", 00:20:02.392 "dhgroup": "ffdhe6144" 00:20:02.392 } 00:20:02.392 } 00:20:02.392 ]' 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.392 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:02.654 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.599 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.172 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.172 { 00:20:04.172 "cntlid": 39, 00:20:04.172 "qid": 0, 00:20:04.172 "state": "enabled", 00:20:04.172 "thread": "nvmf_tgt_poll_group_000", 00:20:04.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.172 "listen_address": { 00:20:04.172 "trtype": "TCP", 00:20:04.172 "adrfam": 
"IPv4", 00:20:04.172 "traddr": "10.0.0.2", 00:20:04.172 "trsvcid": "4420" 00:20:04.172 }, 00:20:04.172 "peer_address": { 00:20:04.172 "trtype": "TCP", 00:20:04.172 "adrfam": "IPv4", 00:20:04.172 "traddr": "10.0.0.1", 00:20:04.172 "trsvcid": "32932" 00:20:04.172 }, 00:20:04.172 "auth": { 00:20:04.172 "state": "completed", 00:20:04.172 "digest": "sha256", 00:20:04.172 "dhgroup": "ffdhe6144" 00:20:04.172 } 00:20:04.172 } 00:20:04.172 ]' 00:20:04.172 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.433 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.695 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:04.695 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.267 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.526 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:05.526 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.527 
11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.527 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.097 00:20:06.097 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.097 11:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.097 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.365 { 00:20:06.365 "cntlid": 41, 00:20:06.365 "qid": 0, 00:20:06.365 "state": "enabled", 00:20:06.365 "thread": "nvmf_tgt_poll_group_000", 00:20:06.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:06.365 "listen_address": { 00:20:06.365 "trtype": "TCP", 00:20:06.365 "adrfam": "IPv4", 00:20:06.365 "traddr": "10.0.0.2", 00:20:06.365 "trsvcid": "4420" 00:20:06.365 }, 00:20:06.365 "peer_address": { 00:20:06.365 "trtype": "TCP", 00:20:06.365 "adrfam": "IPv4", 00:20:06.365 "traddr": "10.0.0.1", 00:20:06.365 "trsvcid": "32966" 00:20:06.365 }, 00:20:06.365 "auth": { 00:20:06.365 "state": "completed", 00:20:06.365 "digest": "sha256", 00:20:06.365 "dhgroup": "ffdhe8192" 00:20:06.365 } 00:20:06.365 } 00:20:06.365 ]' 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.365 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.625 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:06.625 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.195 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.196 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.456 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.028 00:20:08.028 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.028 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.028 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.289 11:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.289 { 00:20:08.289 "cntlid": 43, 00:20:08.289 "qid": 0, 00:20:08.289 "state": "enabled", 00:20:08.289 "thread": "nvmf_tgt_poll_group_000", 00:20:08.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:08.289 "listen_address": { 00:20:08.289 "trtype": "TCP", 00:20:08.289 "adrfam": "IPv4", 00:20:08.289 "traddr": "10.0.0.2", 00:20:08.289 "trsvcid": "4420" 00:20:08.289 }, 00:20:08.289 "peer_address": { 00:20:08.289 "trtype": "TCP", 00:20:08.289 "adrfam": "IPv4", 00:20:08.289 "traddr": "10.0.0.1", 00:20:08.289 "trsvcid": "32992" 00:20:08.289 }, 00:20:08.289 "auth": { 00:20:08.289 "state": "completed", 00:20:08.289 "digest": "sha256", 00:20:08.289 "dhgroup": "ffdhe8192" 00:20:08.289 } 00:20:08.289 } 00:20:08.289 ]' 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.289 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.550 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:08.550 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.123 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.385 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.976 00:20:09.976 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.976 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.976 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.237 { 00:20:10.237 "cntlid": 45, 00:20:10.237 "qid": 0, 00:20:10.237 "state": "enabled", 00:20:10.237 "thread": "nvmf_tgt_poll_group_000", 00:20:10.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:10.237 
"listen_address": { 00:20:10.237 "trtype": "TCP", 00:20:10.237 "adrfam": "IPv4", 00:20:10.237 "traddr": "10.0.0.2", 00:20:10.237 "trsvcid": "4420" 00:20:10.237 }, 00:20:10.237 "peer_address": { 00:20:10.237 "trtype": "TCP", 00:20:10.237 "adrfam": "IPv4", 00:20:10.237 "traddr": "10.0.0.1", 00:20:10.237 "trsvcid": "33036" 00:20:10.237 }, 00:20:10.237 "auth": { 00:20:10.237 "state": "completed", 00:20:10.237 "digest": "sha256", 00:20:10.237 "dhgroup": "ffdhe8192" 00:20:10.237 } 00:20:10.237 } 00:20:10.237 ]' 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.237 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.497 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:10.497 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.069 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.332 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.905 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.905 { 00:20:11.905 "cntlid": 47, 00:20:11.905 "qid": 0, 00:20:11.905 "state": "enabled", 00:20:11.905 "thread": "nvmf_tgt_poll_group_000", 00:20:11.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:11.905 "listen_address": { 00:20:11.905 "trtype": "TCP", 00:20:11.905 "adrfam": "IPv4", 00:20:11.905 "traddr": "10.0.0.2", 00:20:11.905 "trsvcid": "4420" 00:20:11.905 }, 00:20:11.905 "peer_address": { 00:20:11.905 "trtype": "TCP", 00:20:11.905 "adrfam": "IPv4", 00:20:11.905 "traddr": "10.0.0.1", 00:20:11.905 "trsvcid": "44992" 00:20:11.905 }, 00:20:11.905 "auth": { 00:20:11.905 "state": "completed", 00:20:11.905 "digest": "sha256", 00:20:11.905 "dhgroup": "ffdhe8192" 00:20:11.905 } 00:20:11.905 } 00:20:11.905 ]' 00:20:11.905 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.167 11:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.167 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.428 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:12.428 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.015 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.276 
11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.276 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.537 00:20:13.537 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.537 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.537 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.798 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.798 { 00:20:13.798 "cntlid": 49, 00:20:13.798 "qid": 0, 00:20:13.798 "state": "enabled", 00:20:13.798 "thread": "nvmf_tgt_poll_group_000", 00:20:13.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:13.798 "listen_address": { 00:20:13.798 "trtype": "TCP", 00:20:13.798 "adrfam": "IPv4", 00:20:13.798 "traddr": "10.0.0.2", 00:20:13.798 "trsvcid": "4420" 00:20:13.798 }, 00:20:13.798 "peer_address": { 00:20:13.798 "trtype": "TCP", 00:20:13.798 "adrfam": "IPv4", 00:20:13.798 "traddr": "10.0.0.1", 00:20:13.798 "trsvcid": "45016" 00:20:13.798 }, 00:20:13.798 "auth": { 00:20:13.798 "state": "completed", 00:20:13.798 "digest": "sha384", 00:20:13.798 "dhgroup": "null" 00:20:13.798 } 00:20:13.798 } 00:20:13.798 ]' 00:20:13.799 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.799 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.799 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.799 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.799 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.799 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.799 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:13.799 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.058 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:14.058 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.999 11:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.999 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.000 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.260 00:20:15.260 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.260 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.260 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.520 { 00:20:15.520 "cntlid": 51, 00:20:15.520 "qid": 0, 00:20:15.520 "state": "enabled", 00:20:15.520 "thread": "nvmf_tgt_poll_group_000", 00:20:15.520 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:15.520 "listen_address": { 00:20:15.520 "trtype": "TCP", 00:20:15.520 "adrfam": "IPv4", 00:20:15.520 "traddr": "10.0.0.2", 00:20:15.520 "trsvcid": "4420" 00:20:15.520 }, 00:20:15.520 "peer_address": { 00:20:15.520 "trtype": "TCP", 00:20:15.520 "adrfam": "IPv4", 00:20:15.520 "traddr": "10.0.0.1", 00:20:15.520 "trsvcid": "45052" 00:20:15.520 }, 00:20:15.520 "auth": { 00:20:15.520 "state": "completed", 00:20:15.520 "digest": "sha384", 00:20:15.520 "dhgroup": "null" 00:20:15.520 } 00:20:15.520 } 00:20:15.520 ]' 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.520 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.779 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:15.779 11:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.728 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.987 00:20:16.988 11:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.988 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.988 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.248 { 00:20:17.248 "cntlid": 53, 00:20:17.248 "qid": 0, 00:20:17.248 "state": "enabled", 00:20:17.248 "thread": "nvmf_tgt_poll_group_000", 00:20:17.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:17.248 "listen_address": { 00:20:17.248 "trtype": "TCP", 00:20:17.248 "adrfam": "IPv4", 00:20:17.248 "traddr": "10.0.0.2", 00:20:17.248 "trsvcid": "4420" 00:20:17.248 }, 00:20:17.248 "peer_address": { 00:20:17.248 "trtype": "TCP", 00:20:17.248 "adrfam": "IPv4", 00:20:17.248 "traddr": "10.0.0.1", 00:20:17.248 "trsvcid": "45066" 00:20:17.248 }, 00:20:17.248 "auth": { 00:20:17.248 "state": "completed", 00:20:17.248 "digest": "sha384", 00:20:17.248 "dhgroup": "null" 00:20:17.248 } 00:20:17.248 } 00:20:17.248 ]' 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.248 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.509 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:17.509 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:18.079 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:18.340 
11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.340 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.601 00:20:18.601 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.601 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.601 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.862 11:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.862 { 00:20:18.862 "cntlid": 55, 00:20:18.862 "qid": 0, 00:20:18.862 "state": "enabled", 00:20:18.862 "thread": "nvmf_tgt_poll_group_000", 00:20:18.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:18.862 "listen_address": { 00:20:18.862 "trtype": "TCP", 00:20:18.862 "adrfam": "IPv4", 00:20:18.862 "traddr": "10.0.0.2", 00:20:18.862 "trsvcid": "4420" 00:20:18.862 }, 00:20:18.862 "peer_address": { 00:20:18.862 "trtype": "TCP", 00:20:18.862 "adrfam": "IPv4", 00:20:18.862 "traddr": "10.0.0.1", 00:20:18.862 "trsvcid": "45094" 00:20:18.862 }, 00:20:18.862 "auth": { 00:20:18.862 "state": "completed", 00:20:18.862 "digest": "sha384", 00:20:18.862 "dhgroup": "null" 00:20:18.862 } 00:20:18.862 } 00:20:18.862 ]' 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.862 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.123 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:19.123 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.694 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.694 11:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.955 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.216 00:20:20.216 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.216 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.216 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.491 { 00:20:20.491 "cntlid": 57, 00:20:20.491 "qid": 0, 00:20:20.491 "state": "enabled", 00:20:20.491 "thread": "nvmf_tgt_poll_group_000", 00:20:20.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:20.491 "listen_address": { 00:20:20.491 "trtype": "TCP", 00:20:20.491 "adrfam": "IPv4", 00:20:20.491 "traddr": "10.0.0.2", 00:20:20.491 
"trsvcid": "4420" 00:20:20.491 }, 00:20:20.491 "peer_address": { 00:20:20.491 "trtype": "TCP", 00:20:20.491 "adrfam": "IPv4", 00:20:20.491 "traddr": "10.0.0.1", 00:20:20.491 "trsvcid": "45120" 00:20:20.491 }, 00:20:20.491 "auth": { 00:20:20.491 "state": "completed", 00:20:20.491 "digest": "sha384", 00:20:20.491 "dhgroup": "ffdhe2048" 00:20:20.491 } 00:20:20.491 } 00:20:20.491 ]' 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.491 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.805 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:20.805 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.446 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.706 11:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.706 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.706 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.968 { 00:20:21.968 "cntlid": 59, 00:20:21.968 "qid": 0, 00:20:21.968 "state": "enabled", 00:20:21.968 "thread": "nvmf_tgt_poll_group_000", 00:20:21.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:21.968 "listen_address": { 00:20:21.968 "trtype": "TCP", 00:20:21.968 "adrfam": "IPv4", 00:20:21.968 "traddr": "10.0.0.2", 00:20:21.968 "trsvcid": "4420" 00:20:21.968 }, 00:20:21.968 "peer_address": { 00:20:21.968 "trtype": "TCP", 00:20:21.968 "adrfam": "IPv4", 00:20:21.968 "traddr": "10.0.0.1", 00:20:21.968 "trsvcid": "45828" 00:20:21.968 }, 00:20:21.968 "auth": { 00:20:21.968 "state": "completed", 00:20:21.968 "digest": "sha384", 00:20:21.968 "dhgroup": "ffdhe2048" 00:20:21.968 } 00:20:21.968 } 00:20:21.968 ]' 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.968 11:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.968 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:22.229 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.173 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.433 00:20:23.433 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.433 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.433 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.694 11:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.694 { 00:20:23.694 "cntlid": 61, 00:20:23.694 "qid": 0, 00:20:23.694 "state": "enabled", 00:20:23.694 "thread": "nvmf_tgt_poll_group_000", 00:20:23.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:23.694 "listen_address": { 00:20:23.694 "trtype": "TCP", 00:20:23.694 "adrfam": "IPv4", 00:20:23.694 "traddr": "10.0.0.2", 00:20:23.694 "trsvcid": "4420" 00:20:23.694 }, 00:20:23.694 "peer_address": { 00:20:23.694 "trtype": "TCP", 00:20:23.694 "adrfam": "IPv4", 00:20:23.694 "traddr": "10.0.0.1", 00:20:23.694 "trsvcid": "45850" 00:20:23.694 }, 00:20:23.694 "auth": { 00:20:23.694 "state": "completed", 00:20:23.694 "digest": "sha384", 00:20:23.694 "dhgroup": "ffdhe2048" 00:20:23.694 } 00:20:23.694 } 00:20:23.694 ]' 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.694 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.694 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.694 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:23.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:24.899 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.899 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.160 00:20:25.160 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.160 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.160 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.422 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.423 { 00:20:25.423 "cntlid": 63, 00:20:25.423 "qid": 0, 00:20:25.423 "state": "enabled", 00:20:25.423 "thread": "nvmf_tgt_poll_group_000", 00:20:25.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:25.423 "listen_address": { 00:20:25.423 "trtype": "TCP", 00:20:25.423 "adrfam": 
"IPv4", 00:20:25.423 "traddr": "10.0.0.2", 00:20:25.423 "trsvcid": "4420" 00:20:25.423 }, 00:20:25.423 "peer_address": { 00:20:25.423 "trtype": "TCP", 00:20:25.423 "adrfam": "IPv4", 00:20:25.423 "traddr": "10.0.0.1", 00:20:25.423 "trsvcid": "45874" 00:20:25.423 }, 00:20:25.423 "auth": { 00:20:25.423 "state": "completed", 00:20:25.423 "digest": "sha384", 00:20:25.423 "dhgroup": "ffdhe2048" 00:20:25.423 } 00:20:25.423 } 00:20:25.423 ]' 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.423 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.684 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:25.684 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.624 
11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.624 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.886 00:20:26.886 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.886 11:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.886 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.146 { 00:20:27.146 "cntlid": 65, 00:20:27.146 "qid": 0, 00:20:27.146 "state": "enabled", 00:20:27.146 "thread": "nvmf_tgt_poll_group_000", 00:20:27.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:27.146 "listen_address": { 00:20:27.146 "trtype": "TCP", 00:20:27.146 "adrfam": "IPv4", 00:20:27.146 "traddr": "10.0.0.2", 00:20:27.146 "trsvcid": "4420" 00:20:27.146 }, 00:20:27.146 "peer_address": { 00:20:27.146 "trtype": "TCP", 00:20:27.146 "adrfam": "IPv4", 00:20:27.146 "traddr": "10.0.0.1", 00:20:27.146 "trsvcid": "45900" 00:20:27.146 }, 00:20:27.146 "auth": { 00:20:27.146 "state": "completed", 00:20:27.146 "digest": "sha384", 00:20:27.146 "dhgroup": "ffdhe3072" 00:20:27.146 } 00:20:27.146 } 00:20:27.146 ]' 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.146 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.406 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:27.406 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.348 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.609 00:20:28.609 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.609 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.609 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.870 11:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.870 { 00:20:28.870 "cntlid": 67, 00:20:28.870 "qid": 0, 00:20:28.870 "state": "enabled", 00:20:28.870 "thread": "nvmf_tgt_poll_group_000", 00:20:28.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:28.870 "listen_address": { 00:20:28.870 "trtype": "TCP", 00:20:28.870 "adrfam": "IPv4", 00:20:28.870 "traddr": "10.0.0.2", 00:20:28.870 "trsvcid": "4420" 00:20:28.870 }, 00:20:28.870 "peer_address": { 00:20:28.870 "trtype": "TCP", 00:20:28.870 "adrfam": "IPv4", 00:20:28.870 "traddr": "10.0.0.1", 00:20:28.870 "trsvcid": "45924" 00:20:28.870 }, 00:20:28.870 "auth": { 00:20:28.870 "state": "completed", 00:20:28.870 "digest": "sha384", 00:20:28.870 "dhgroup": "ffdhe3072" 00:20:28.870 } 00:20:28.870 } 00:20:28.870 ]' 00:20:28.870 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.870 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.131 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:29.131 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:29.703 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.963 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.964 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.225 00:20:30.225 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.225 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.225 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.487 { 00:20:30.487 "cntlid": 69, 00:20:30.487 "qid": 0, 00:20:30.487 "state": "enabled", 00:20:30.487 "thread": "nvmf_tgt_poll_group_000", 00:20:30.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:30.487 
"listen_address": { 00:20:30.487 "trtype": "TCP", 00:20:30.487 "adrfam": "IPv4", 00:20:30.487 "traddr": "10.0.0.2", 00:20:30.487 "trsvcid": "4420" 00:20:30.487 }, 00:20:30.487 "peer_address": { 00:20:30.487 "trtype": "TCP", 00:20:30.487 "adrfam": "IPv4", 00:20:30.487 "traddr": "10.0.0.1", 00:20:30.487 "trsvcid": "45962" 00:20:30.487 }, 00:20:30.487 "auth": { 00:20:30.487 "state": "completed", 00:20:30.487 "digest": "sha384", 00:20:30.487 "dhgroup": "ffdhe3072" 00:20:30.487 } 00:20:30.487 } 00:20:30.487 ]' 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.487 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.748 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:30.748 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.686 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.946 00:20:31.946 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.946 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:31.946 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.207 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.207 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.207 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.207 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.208 { 00:20:32.208 "cntlid": 71, 00:20:32.208 "qid": 0, 00:20:32.208 "state": "enabled", 00:20:32.208 "thread": "nvmf_tgt_poll_group_000", 00:20:32.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:32.208 "listen_address": { 00:20:32.208 "trtype": "TCP", 00:20:32.208 "adrfam": "IPv4", 00:20:32.208 "traddr": "10.0.0.2", 00:20:32.208 "trsvcid": "4420" 00:20:32.208 }, 00:20:32.208 "peer_address": { 00:20:32.208 "trtype": "TCP", 00:20:32.208 "adrfam": "IPv4", 00:20:32.208 "traddr": "10.0.0.1", 00:20:32.208 "trsvcid": "58782" 00:20:32.208 }, 00:20:32.208 "auth": { 00:20:32.208 "state": "completed", 00:20:32.208 "digest": "sha384", 00:20:32.208 "dhgroup": "ffdhe3072" 00:20:32.208 } 00:20:32.208 } 00:20:32.208 ]' 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.208 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.208 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.469 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:32.469 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.413 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.414 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.673 00:20:33.673 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.673 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.673 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.933 11:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.933 { 00:20:33.933 "cntlid": 73, 00:20:33.933 "qid": 0, 00:20:33.933 "state": "enabled", 00:20:33.933 "thread": "nvmf_tgt_poll_group_000", 00:20:33.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:33.933 "listen_address": { 00:20:33.933 "trtype": "TCP", 00:20:33.933 "adrfam": "IPv4", 00:20:33.933 "traddr": "10.0.0.2", 00:20:33.933 "trsvcid": "4420" 00:20:33.933 }, 00:20:33.933 "peer_address": { 00:20:33.933 "trtype": "TCP", 00:20:33.933 "adrfam": "IPv4", 00:20:33.933 "traddr": "10.0.0.1", 00:20:33.933 "trsvcid": "58812" 00:20:33.933 }, 00:20:33.933 "auth": { 00:20:33.933 "state": "completed", 00:20:33.933 "digest": "sha384", 00:20:33.933 "dhgroup": "ffdhe4096" 00:20:33.933 } 00:20:33.933 } 00:20:33.933 ]' 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.933 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.933 11:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.191 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:34.191 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.387 00:20:35.387 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.387 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.387 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.647 { 00:20:35.647 "cntlid": 75, 00:20:35.647 "qid": 0, 00:20:35.647 "state": "enabled", 00:20:35.647 "thread": "nvmf_tgt_poll_group_000", 00:20:35.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:35.647 
"listen_address": { 00:20:35.647 "trtype": "TCP", 00:20:35.647 "adrfam": "IPv4", 00:20:35.647 "traddr": "10.0.0.2", 00:20:35.647 "trsvcid": "4420" 00:20:35.647 }, 00:20:35.647 "peer_address": { 00:20:35.647 "trtype": "TCP", 00:20:35.647 "adrfam": "IPv4", 00:20:35.647 "traddr": "10.0.0.1", 00:20:35.647 "trsvcid": "58842" 00:20:35.647 }, 00:20:35.647 "auth": { 00:20:35.647 "state": "completed", 00:20:35.647 "digest": "sha384", 00:20:35.647 "dhgroup": "ffdhe4096" 00:20:35.647 } 00:20:35.647 } 00:20:35.647 ]' 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.647 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.911 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:35.911 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.849 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.849 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.109 00:20:37.109 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:37.109 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.109 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.369 { 00:20:37.369 "cntlid": 77, 00:20:37.369 "qid": 0, 00:20:37.369 "state": "enabled", 00:20:37.369 "thread": "nvmf_tgt_poll_group_000", 00:20:37.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:37.369 "listen_address": { 00:20:37.369 "trtype": "TCP", 00:20:37.369 "adrfam": "IPv4", 00:20:37.369 "traddr": "10.0.0.2", 00:20:37.369 "trsvcid": "4420" 00:20:37.369 }, 00:20:37.369 "peer_address": { 00:20:37.369 "trtype": "TCP", 00:20:37.369 "adrfam": "IPv4", 00:20:37.369 "traddr": "10.0.0.1", 00:20:37.369 "trsvcid": "58864" 00:20:37.369 }, 00:20:37.369 "auth": { 00:20:37.369 "state": "completed", 00:20:37.369 "digest": "sha384", 00:20:37.369 "dhgroup": "ffdhe4096" 00:20:37.369 } 00:20:37.369 } 00:20:37.369 ]' 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.369 11:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.369 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.630 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:37.630 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:38.572 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:38.573 11:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.573 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.833 00:20:38.833 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.833 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.833 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.094 11:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.094 { 00:20:39.094 "cntlid": 79, 00:20:39.094 "qid": 0, 00:20:39.094 "state": "enabled", 00:20:39.094 "thread": "nvmf_tgt_poll_group_000", 00:20:39.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:39.094 "listen_address": { 00:20:39.094 "trtype": "TCP", 00:20:39.094 "adrfam": "IPv4", 00:20:39.094 "traddr": "10.0.0.2", 00:20:39.094 "trsvcid": "4420" 00:20:39.094 }, 00:20:39.094 "peer_address": { 00:20:39.094 "trtype": "TCP", 00:20:39.094 "adrfam": "IPv4", 00:20:39.094 "traddr": "10.0.0.1", 00:20:39.094 "trsvcid": "58884" 00:20:39.094 }, 00:20:39.094 "auth": { 00:20:39.094 "state": "completed", 00:20:39.094 "digest": "sha384", 00:20:39.094 "dhgroup": "ffdhe4096" 00:20:39.094 } 00:20:39.094 } 00:20:39.094 ]' 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.094 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.356 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.356 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.356 11:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.356 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:39.356 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.297 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.869 00:20:40.869 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.869 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.869 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.869 { 00:20:40.869 "cntlid": 81, 00:20:40.869 "qid": 0, 00:20:40.869 "state": "enabled", 00:20:40.869 "thread": "nvmf_tgt_poll_group_000", 00:20:40.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:40.869 "listen_address": { 
00:20:40.869 "trtype": "TCP", 00:20:40.869 "adrfam": "IPv4", 00:20:40.869 "traddr": "10.0.0.2", 00:20:40.869 "trsvcid": "4420" 00:20:40.869 }, 00:20:40.869 "peer_address": { 00:20:40.869 "trtype": "TCP", 00:20:40.869 "adrfam": "IPv4", 00:20:40.869 "traddr": "10.0.0.1", 00:20:40.869 "trsvcid": "58904" 00:20:40.869 }, 00:20:40.869 "auth": { 00:20:40.869 "state": "completed", 00:20:40.869 "digest": "sha384", 00:20:40.869 "dhgroup": "ffdhe6144" 00:20:40.869 } 00:20:40.869 } 00:20:40.869 ]' 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.869 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:41.130 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.072 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.333 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.333 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.333 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.333 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.593 00:20:42.593 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.593 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.593 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.854 { 00:20:42.854 "cntlid": 83, 00:20:42.854 "qid": 0, 00:20:42.854 "state": "enabled", 00:20:42.854 "thread": "nvmf_tgt_poll_group_000", 00:20:42.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:42.854 "listen_address": { 00:20:42.854 "trtype": "TCP", 00:20:42.854 "adrfam": "IPv4", 00:20:42.854 "traddr": "10.0.0.2", 00:20:42.854 "trsvcid": "4420" 00:20:42.854 }, 00:20:42.854 "peer_address": { 00:20:42.854 "trtype": "TCP", 00:20:42.854 "adrfam": "IPv4", 00:20:42.854 "traddr": "10.0.0.1", 00:20:42.854 "trsvcid": "39954" 00:20:42.854 }, 00:20:42.854 "auth": { 00:20:42.854 "state": "completed", 00:20:42.854 "digest": "sha384", 00:20:42.854 "dhgroup": "ffdhe6144" 00:20:42.854 } 00:20:42.854 } 00:20:42.854 ]' 00:20:42.854 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.854 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.115 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:43.115 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:43.688 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.688 11:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.688 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.949 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.210 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.471 { 00:20:44.471 "cntlid": 85, 00:20:44.471 "qid": 0, 00:20:44.471 "state": "enabled", 00:20:44.471 "thread": "nvmf_tgt_poll_group_000", 00:20:44.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:44.471 "listen_address": { 00:20:44.471 "trtype": "TCP", 00:20:44.471 "adrfam": "IPv4", 00:20:44.471 "traddr": "10.0.0.2", 00:20:44.471 "trsvcid": "4420" 00:20:44.471 }, 00:20:44.471 "peer_address": { 00:20:44.471 "trtype": "TCP", 00:20:44.471 "adrfam": "IPv4", 00:20:44.471 "traddr": "10.0.0.1", 00:20:44.471 "trsvcid": "39972" 00:20:44.471 }, 00:20:44.471 "auth": { 00:20:44.471 "state": "completed", 00:20:44.471 "digest": "sha384", 00:20:44.471 "dhgroup": "ffdhe6144" 00:20:44.471 } 00:20:44.471 } 00:20:44.471 ]' 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.471 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.733 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.994 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:44.994 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.568 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.828 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.829 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.829 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.089 00:20:46.089 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.089 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.089 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.349 { 00:20:46.349 "cntlid": 87, 00:20:46.349 "qid": 0, 00:20:46.349 "state": "enabled", 00:20:46.349 "thread": "nvmf_tgt_poll_group_000", 00:20:46.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.349 "listen_address": { 00:20:46.349 "trtype": 
"TCP", 00:20:46.349 "adrfam": "IPv4", 00:20:46.349 "traddr": "10.0.0.2", 00:20:46.349 "trsvcid": "4420" 00:20:46.349 }, 00:20:46.349 "peer_address": { 00:20:46.349 "trtype": "TCP", 00:20:46.349 "adrfam": "IPv4", 00:20:46.349 "traddr": "10.0.0.1", 00:20:46.349 "trsvcid": "40000" 00:20:46.349 }, 00:20:46.349 "auth": { 00:20:46.349 "state": "completed", 00:20:46.349 "digest": "sha384", 00:20:46.349 "dhgroup": "ffdhe6144" 00:20:46.349 } 00:20:46.349 } 00:20:46.349 ]' 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.349 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.609 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:46.609 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.549 11:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.549 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.119 00:20:48.119 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.119 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.119 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.379 { 00:20:48.379 "cntlid": 89, 00:20:48.379 "qid": 0, 00:20:48.379 "state": "enabled", 00:20:48.379 "thread": "nvmf_tgt_poll_group_000", 00:20:48.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:48.379 "listen_address": { 00:20:48.379 "trtype": "TCP", 00:20:48.379 "adrfam": "IPv4", 00:20:48.379 "traddr": "10.0.0.2", 00:20:48.379 "trsvcid": "4420" 00:20:48.379 }, 00:20:48.379 "peer_address": { 00:20:48.379 "trtype": "TCP", 00:20:48.379 "adrfam": "IPv4", 00:20:48.379 "traddr": "10.0.0.1", 00:20:48.379 "trsvcid": "40040" 00:20:48.379 }, 00:20:48.379 "auth": { 00:20:48.379 "state": "completed", 00:20:48.379 "digest": "sha384", 00:20:48.379 "dhgroup": "ffdhe8192" 00:20:48.379 } 00:20:48.379 } 00:20:48.379 ]' 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.379 11:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.379 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.640 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:48.640 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.579 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.580 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.580 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.580 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.580 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.580 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.150 00:20:50.150 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.150 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.150 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.411 { 00:20:50.411 "cntlid": 91, 00:20:50.411 "qid": 0, 00:20:50.411 "state": "enabled", 00:20:50.411 "thread": "nvmf_tgt_poll_group_000", 00:20:50.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:50.411 "listen_address": { 00:20:50.411 "trtype": "TCP", 00:20:50.411 "adrfam": "IPv4", 00:20:50.411 "traddr": "10.0.0.2", 00:20:50.411 "trsvcid": "4420" 00:20:50.411 }, 00:20:50.411 "peer_address": { 00:20:50.411 "trtype": "TCP", 00:20:50.411 "adrfam": "IPv4", 00:20:50.411 "traddr": "10.0.0.1", 00:20:50.411 "trsvcid": "40078" 00:20:50.411 }, 00:20:50.411 "auth": { 00:20:50.411 "state": "completed", 00:20:50.411 "digest": "sha384", 00:20:50.411 "dhgroup": "ffdhe8192" 00:20:50.411 } 00:20:50.411 } 00:20:50.411 ]' 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.411 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.671 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:50.671 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:51.242 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.502 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.072 00:20:52.072 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.072 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.072 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.331 { 00:20:52.331 "cntlid": 93, 00:20:52.331 "qid": 0, 00:20:52.331 "state": "enabled", 00:20:52.331 "thread": "nvmf_tgt_poll_group_000", 00:20:52.331 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:52.331 "listen_address": { 00:20:52.331 "trtype": "TCP", 00:20:52.331 "adrfam": "IPv4", 00:20:52.331 "traddr": "10.0.0.2", 00:20:52.331 "trsvcid": "4420" 00:20:52.331 }, 00:20:52.331 "peer_address": { 00:20:52.331 "trtype": "TCP", 00:20:52.331 "adrfam": "IPv4", 00:20:52.331 "traddr": "10.0.0.1", 00:20:52.331 "trsvcid": "54280" 00:20:52.331 }, 00:20:52.331 "auth": { 00:20:52.331 "state": "completed", 00:20:52.331 "digest": "sha384", 00:20:52.331 "dhgroup": "ffdhe8192" 00:20:52.331 } 00:20:52.331 } 00:20:52.331 ]' 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.331 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.332 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.332 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.332 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.332 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.332 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.591 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:52.591 11:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.529 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.530 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.097 00:20:54.097 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.097 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.097 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.357 { 00:20:54.357 "cntlid": 95, 00:20:54.357 "qid": 0, 00:20:54.357 "state": "enabled", 00:20:54.357 "thread": "nvmf_tgt_poll_group_000", 00:20:54.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:54.357 "listen_address": { 00:20:54.357 "trtype": "TCP", 00:20:54.357 "adrfam": "IPv4", 00:20:54.357 "traddr": "10.0.0.2", 00:20:54.357 "trsvcid": "4420" 00:20:54.357 }, 00:20:54.357 "peer_address": { 00:20:54.357 "trtype": "TCP", 00:20:54.357 "adrfam": "IPv4", 00:20:54.357 "traddr": "10.0.0.1", 00:20:54.357 "trsvcid": "54310" 00:20:54.357 }, 00:20:54.357 "auth": { 00:20:54.357 "state": "completed", 00:20:54.357 "digest": "sha384", 00:20:54.357 "dhgroup": "ffdhe8192" 00:20:54.357 } 00:20:54.357 } 00:20:54.357 ]' 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.357 11:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.357 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.617 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:54.617 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.556 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.557 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.557 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.817 00:20:55.817 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.817 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.817 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.817 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.817 11:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.817 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.817 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.114 { 00:20:56.114 "cntlid": 97, 00:20:56.114 "qid": 0, 00:20:56.114 "state": "enabled", 00:20:56.114 "thread": "nvmf_tgt_poll_group_000", 00:20:56.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:56.114 "listen_address": { 00:20:56.114 "trtype": "TCP", 00:20:56.114 "adrfam": "IPv4", 00:20:56.114 "traddr": "10.0.0.2", 00:20:56.114 "trsvcid": "4420" 00:20:56.114 }, 00:20:56.114 "peer_address": { 00:20:56.114 "trtype": "TCP", 00:20:56.114 "adrfam": "IPv4", 00:20:56.114 "traddr": "10.0.0.1", 00:20:56.114 "trsvcid": "54336" 00:20:56.114 }, 00:20:56.114 "auth": { 00:20:56.114 "state": "completed", 00:20:56.114 "digest": "sha512", 00:20:56.114 "dhgroup": "null" 00:20:56.114 } 00:20:56.114 } 00:20:56.114 ]' 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.114 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.375 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:56.375 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.945 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.206 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.467 00:20:57.467 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.467 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.467 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.727 { 00:20:57.727 "cntlid": 99, 
00:20:57.727 "qid": 0, 00:20:57.727 "state": "enabled", 00:20:57.727 "thread": "nvmf_tgt_poll_group_000", 00:20:57.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:57.727 "listen_address": { 00:20:57.727 "trtype": "TCP", 00:20:57.727 "adrfam": "IPv4", 00:20:57.727 "traddr": "10.0.0.2", 00:20:57.727 "trsvcid": "4420" 00:20:57.727 }, 00:20:57.727 "peer_address": { 00:20:57.727 "trtype": "TCP", 00:20:57.727 "adrfam": "IPv4", 00:20:57.727 "traddr": "10.0.0.1", 00:20:57.727 "trsvcid": "54364" 00:20:57.727 }, 00:20:57.727 "auth": { 00:20:57.727 "state": "completed", 00:20:57.727 "digest": "sha512", 00:20:57.727 "dhgroup": "null" 00:20:57.727 } 00:20:57.727 } 00:20:57.727 ]' 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.727 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.728 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.728 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.728 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.728 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.989 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret 
DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:57.989 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.931 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.931 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.193 00:20:59.193 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.193 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.193 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.453 { 00:20:59.453 "cntlid": 101, 00:20:59.453 "qid": 0, 00:20:59.453 "state": "enabled", 00:20:59.453 "thread": "nvmf_tgt_poll_group_000", 00:20:59.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:59.453 "listen_address": { 00:20:59.453 "trtype": "TCP", 00:20:59.453 "adrfam": "IPv4", 00:20:59.453 "traddr": "10.0.0.2", 00:20:59.453 "trsvcid": "4420" 00:20:59.453 }, 00:20:59.453 "peer_address": { 00:20:59.453 "trtype": "TCP", 00:20:59.453 "adrfam": "IPv4", 00:20:59.453 "traddr": "10.0.0.1", 00:20:59.453 "trsvcid": "54382" 00:20:59.453 }, 00:20:59.453 "auth": { 00:20:59.453 "state": "completed", 00:20:59.453 "digest": "sha512", 00:20:59.453 "dhgroup": "null" 00:20:59.453 } 00:20:59.453 } 
00:20:59.453 ]' 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.453 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.454 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.454 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.454 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.715 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:20:59.715 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.364 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.364 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.655 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.919 00:21:00.919 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.919 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.919 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.919 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.180 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:01.180 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.181 { 00:21:01.181 "cntlid": 103, 00:21:01.181 "qid": 0, 00:21:01.181 "state": "enabled", 00:21:01.181 "thread": "nvmf_tgt_poll_group_000", 00:21:01.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:01.181 "listen_address": { 00:21:01.181 "trtype": "TCP", 00:21:01.181 "adrfam": "IPv4", 00:21:01.181 "traddr": "10.0.0.2", 00:21:01.181 "trsvcid": "4420" 00:21:01.181 }, 00:21:01.181 "peer_address": { 00:21:01.181 "trtype": "TCP", 00:21:01.181 "adrfam": "IPv4", 00:21:01.181 "traddr": "10.0.0.1", 00:21:01.181 "trsvcid": "54418" 00:21:01.181 }, 00:21:01.181 "auth": { 00:21:01.181 "state": "completed", 00:21:01.181 "digest": "sha512", 00:21:01.181 "dhgroup": "null" 00:21:01.181 } 00:21:01.181 } 00:21:01.181 ]' 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.181 11:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.181 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.444 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:01.444 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:02.015 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.276 11:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.276 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.537 00:21:02.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.798 { 00:21:02.798 "cntlid": 105, 00:21:02.798 "qid": 0, 00:21:02.798 "state": "enabled", 00:21:02.798 "thread": "nvmf_tgt_poll_group_000", 00:21:02.798 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:02.798 "listen_address": { 00:21:02.798 "trtype": "TCP", 00:21:02.798 "adrfam": "IPv4", 00:21:02.798 "traddr": "10.0.0.2", 00:21:02.798 "trsvcid": "4420" 00:21:02.798 }, 00:21:02.798 "peer_address": { 00:21:02.798 "trtype": "TCP", 00:21:02.798 "adrfam": "IPv4", 00:21:02.798 "traddr": "10.0.0.1", 00:21:02.798 "trsvcid": "47310" 00:21:02.798 }, 00:21:02.798 "auth": { 00:21:02.798 "state": "completed", 00:21:02.798 "digest": "sha512", 00:21:02.798 "dhgroup": "ffdhe2048" 00:21:02.798 } 00:21:02.798 } 00:21:02.798 ]' 00:21:02.798 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.798 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.059 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:03.059 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.000 11:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.000 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.260 00:21:04.260 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.260 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.260 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.520 { 00:21:04.520 "cntlid": 107, 00:21:04.520 "qid": 0, 00:21:04.520 "state": "enabled", 00:21:04.520 "thread": "nvmf_tgt_poll_group_000", 00:21:04.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.520 "listen_address": { 00:21:04.520 "trtype": "TCP", 00:21:04.520 "adrfam": "IPv4", 00:21:04.520 "traddr": "10.0.0.2", 00:21:04.520 "trsvcid": "4420" 00:21:04.520 }, 00:21:04.520 "peer_address": { 00:21:04.520 "trtype": "TCP", 00:21:04.520 "adrfam": "IPv4", 00:21:04.520 "traddr": "10.0.0.1", 00:21:04.520 "trsvcid": "47338" 00:21:04.520 }, 00:21:04.520 "auth": { 00:21:04.520 "state": 
"completed", 00:21:04.520 "digest": "sha512", 00:21:04.520 "dhgroup": "ffdhe2048" 00:21:04.520 } 00:21:04.520 } 00:21:04.520 ]' 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.520 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.780 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.780 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.780 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.780 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:04.780 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:05.719 11:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.719 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.979 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.239 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.239 
11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.239 { 00:21:06.239 "cntlid": 109, 00:21:06.239 "qid": 0, 00:21:06.239 "state": "enabled", 00:21:06.239 "thread": "nvmf_tgt_poll_group_000", 00:21:06.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:06.239 "listen_address": { 00:21:06.239 "trtype": "TCP", 00:21:06.239 "adrfam": "IPv4", 00:21:06.239 "traddr": "10.0.0.2", 00:21:06.239 "trsvcid": "4420" 00:21:06.239 }, 00:21:06.239 "peer_address": { 00:21:06.239 "trtype": "TCP", 00:21:06.239 "adrfam": "IPv4", 00:21:06.239 "traddr": "10.0.0.1", 00:21:06.239 "trsvcid": "47360" 00:21:06.239 }, 00:21:06.239 "auth": { 00:21:06.239 "state": "completed", 00:21:06.239 "digest": "sha512", 00:21:06.239 "dhgroup": "ffdhe2048" 00:21:06.239 } 00:21:06.239 } 00:21:06.239 ]' 00:21:06.239 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.499 11:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.499 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.759 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:06.759 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.327 
11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.327 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.587 11:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.587 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.847 00:21:07.847 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.847 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.847 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.106 { 00:21:08.106 "cntlid": 111, 
00:21:08.106 "qid": 0, 00:21:08.106 "state": "enabled", 00:21:08.106 "thread": "nvmf_tgt_poll_group_000", 00:21:08.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:08.106 "listen_address": { 00:21:08.106 "trtype": "TCP", 00:21:08.106 "adrfam": "IPv4", 00:21:08.106 "traddr": "10.0.0.2", 00:21:08.106 "trsvcid": "4420" 00:21:08.106 }, 00:21:08.106 "peer_address": { 00:21:08.106 "trtype": "TCP", 00:21:08.106 "adrfam": "IPv4", 00:21:08.106 "traddr": "10.0.0.1", 00:21:08.106 "trsvcid": "47380" 00:21:08.106 }, 00:21:08.106 "auth": { 00:21:08.106 "state": "completed", 00:21:08.106 "digest": "sha512", 00:21:08.106 "dhgroup": "ffdhe2048" 00:21:08.106 } 00:21:08.106 } 00:21:08.106 ]' 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.106 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.107 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.107 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.366 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:08.366 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.308 11:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.308 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.309 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.570 00:21:09.570 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.570 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.570 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.831 { 00:21:09.831 "cntlid": 113, 00:21:09.831 "qid": 0, 00:21:09.831 "state": "enabled", 00:21:09.831 "thread": "nvmf_tgt_poll_group_000", 00:21:09.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.831 "listen_address": { 00:21:09.831 "trtype": "TCP", 00:21:09.831 "adrfam": "IPv4", 00:21:09.831 "traddr": "10.0.0.2", 00:21:09.831 "trsvcid": "4420" 00:21:09.831 }, 00:21:09.831 "peer_address": { 00:21:09.831 "trtype": "TCP", 00:21:09.831 "adrfam": "IPv4", 00:21:09.831 "traddr": "10.0.0.1", 00:21:09.831 "trsvcid": "47404" 00:21:09.831 }, 00:21:09.831 "auth": { 00:21:09.831 "state": 
"completed", 00:21:09.831 "digest": "sha512", 00:21:09.831 "dhgroup": "ffdhe3072" 00:21:09.831 } 00:21:09.831 } 00:21:09.831 ]' 00:21:09.831 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.831 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.092 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:10.092 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:11.033 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.034 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.294 00:21:11.294 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.294 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.294 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.554 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.554 { 00:21:11.554 "cntlid": 115, 00:21:11.554 "qid": 0, 00:21:11.554 "state": "enabled", 00:21:11.554 "thread": "nvmf_tgt_poll_group_000", 00:21:11.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.554 "listen_address": { 00:21:11.554 "trtype": "TCP", 00:21:11.554 "adrfam": "IPv4", 00:21:11.554 "traddr": "10.0.0.2", 00:21:11.554 "trsvcid": "4420" 00:21:11.554 }, 00:21:11.554 "peer_address": { 00:21:11.554 "trtype": "TCP", 00:21:11.554 "adrfam": "IPv4", 00:21:11.554 "traddr": "10.0.0.1", 00:21:11.554 "trsvcid": "47424" 00:21:11.555 }, 00:21:11.555 "auth": { 00:21:11.555 "state": "completed", 00:21:11.555 "digest": "sha512", 00:21:11.555 "dhgroup": "ffdhe3072" 00:21:11.555 } 00:21:11.555 } 00:21:11.555 ]' 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.555 11:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.555 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.815 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:11.815 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.757 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.016 00:21:13.016 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.016 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.016 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.276 11:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.276 { 00:21:13.276 "cntlid": 117, 00:21:13.276 "qid": 0, 00:21:13.276 "state": "enabled", 00:21:13.276 "thread": "nvmf_tgt_poll_group_000", 00:21:13.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.276 "listen_address": { 00:21:13.276 "trtype": "TCP", 00:21:13.276 "adrfam": "IPv4", 00:21:13.276 "traddr": "10.0.0.2", 00:21:13.276 "trsvcid": "4420" 00:21:13.276 }, 00:21:13.276 "peer_address": { 00:21:13.276 "trtype": "TCP", 00:21:13.276 "adrfam": "IPv4", 00:21:13.276 "traddr": "10.0.0.1", 00:21:13.276 "trsvcid": "37448" 00:21:13.276 }, 00:21:13.276 "auth": { 00:21:13.276 "state": "completed", 00:21:13.276 "digest": "sha512", 00:21:13.276 "dhgroup": "ffdhe3072" 00:21:13.276 } 00:21:13.276 } 00:21:13.276 ]' 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.276 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.277 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.537 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:13.537 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:14.106 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.367 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.628 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.888 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.888 { 00:21:14.888 "cntlid": 119, 00:21:14.888 "qid": 0, 00:21:14.888 "state": "enabled", 00:21:14.888 "thread": "nvmf_tgt_poll_group_000", 00:21:14.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.888 "listen_address": { 00:21:14.888 "trtype": "TCP", 00:21:14.888 "adrfam": "IPv4", 00:21:14.888 "traddr": "10.0.0.2", 00:21:14.888 "trsvcid": "4420" 00:21:14.888 }, 00:21:14.888 "peer_address": { 00:21:14.888 "trtype": "TCP", 00:21:14.888 "adrfam": "IPv4", 00:21:14.888 "traddr": "10.0.0.1", 
00:21:14.888 "trsvcid": "37484" 00:21:14.888 }, 00:21:14.888 "auth": { 00:21:14.888 "state": "completed", 00:21:14.888 "digest": "sha512", 00:21:14.888 "dhgroup": "ffdhe3072" 00:21:14.888 } 00:21:14.888 } 00:21:14.888 ]' 00:21:14.888 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.149 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:15.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.979 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:16.238 11:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.238 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.498 00:21:16.498 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.498 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.498 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.757 { 00:21:16.757 "cntlid": 121, 00:21:16.757 "qid": 0, 00:21:16.757 "state": "enabled", 00:21:16.757 "thread": "nvmf_tgt_poll_group_000", 00:21:16.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.757 "listen_address": { 00:21:16.757 "trtype": "TCP", 00:21:16.757 "adrfam": "IPv4", 00:21:16.757 "traddr": "10.0.0.2", 00:21:16.757 "trsvcid": "4420" 00:21:16.757 }, 00:21:16.757 "peer_address": { 00:21:16.757 "trtype": "TCP", 00:21:16.757 "adrfam": "IPv4", 00:21:16.757 "traddr": "10.0.0.1", 00:21:16.757 "trsvcid": "37508" 00:21:16.757 }, 00:21:16.757 "auth": { 00:21:16.757 "state": "completed", 00:21:16.757 "digest": "sha512", 00:21:16.757 "dhgroup": "ffdhe4096" 00:21:16.757 } 00:21:16.757 } 00:21:16.757 ]' 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.757 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.757 11:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.757 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.757 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.757 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.757 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.017 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:17.017 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.958 11:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.958 11:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.958 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.219 00:21:18.219 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.219 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.219 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.479 { 00:21:18.479 "cntlid": 123, 00:21:18.479 "qid": 0, 00:21:18.479 "state": "enabled", 00:21:18.479 "thread": "nvmf_tgt_poll_group_000", 00:21:18.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.479 "listen_address": { 00:21:18.479 "trtype": "TCP", 00:21:18.479 "adrfam": "IPv4", 00:21:18.479 "traddr": "10.0.0.2", 00:21:18.479 "trsvcid": "4420" 00:21:18.479 }, 00:21:18.479 "peer_address": { 00:21:18.479 "trtype": "TCP", 00:21:18.479 "adrfam": "IPv4", 00:21:18.479 "traddr": "10.0.0.1", 00:21:18.479 "trsvcid": "37516" 00:21:18.479 }, 00:21:18.479 "auth": { 00:21:18.479 "state": "completed", 00:21:18.479 "digest": "sha512", 00:21:18.479 "dhgroup": "ffdhe4096" 00:21:18.479 } 00:21:18.479 } 00:21:18.479 ]' 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.479 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.740 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:18.740 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.683 11:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.683 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.945 00:21:19.945 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.945 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.945 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.206 { 00:21:20.206 "cntlid": 125, 00:21:20.206 "qid": 0, 00:21:20.206 "state": "enabled", 00:21:20.206 "thread": "nvmf_tgt_poll_group_000", 00:21:20.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:20.206 "listen_address": { 00:21:20.206 "trtype": "TCP", 00:21:20.206 "adrfam": "IPv4", 00:21:20.206 "traddr": "10.0.0.2", 00:21:20.206 
"trsvcid": "4420" 00:21:20.206 }, 00:21:20.206 "peer_address": { 00:21:20.206 "trtype": "TCP", 00:21:20.206 "adrfam": "IPv4", 00:21:20.206 "traddr": "10.0.0.1", 00:21:20.206 "trsvcid": "37554" 00:21:20.206 }, 00:21:20.206 "auth": { 00:21:20.206 "state": "completed", 00:21:20.206 "digest": "sha512", 00:21:20.206 "dhgroup": "ffdhe4096" 00:21:20.206 } 00:21:20.206 } 00:21:20.206 ]' 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.206 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.469 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:20.469 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:21.412 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.413 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.674 00:21:21.674 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.674 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.674 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.936 { 00:21:21.936 "cntlid": 127, 00:21:21.936 "qid": 0, 00:21:21.936 "state": "enabled", 00:21:21.936 "thread": "nvmf_tgt_poll_group_000", 00:21:21.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.936 "listen_address": { 00:21:21.936 "trtype": "TCP", 00:21:21.936 "adrfam": "IPv4", 00:21:21.936 "traddr": "10.0.0.2", 00:21:21.936 "trsvcid": "4420" 00:21:21.936 }, 00:21:21.936 "peer_address": { 00:21:21.936 "trtype": "TCP", 00:21:21.936 "adrfam": "IPv4", 00:21:21.936 "traddr": "10.0.0.1", 00:21:21.936 "trsvcid": "34556" 00:21:21.936 }, 00:21:21.936 "auth": { 00:21:21.936 "state": "completed", 00:21:21.936 "digest": "sha512", 00:21:21.936 "dhgroup": "ffdhe4096" 00:21:21.936 } 00:21:21.936 } 00:21:21.936 ]' 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.936 11:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.936 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.196 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:22.196 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.141 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.402 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.663 11:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.663 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.663 { 00:21:23.663 "cntlid": 129, 00:21:23.663 "qid": 0, 00:21:23.663 "state": "enabled", 00:21:23.663 "thread": "nvmf_tgt_poll_group_000", 00:21:23.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:23.664 "listen_address": { 00:21:23.664 "trtype": "TCP", 00:21:23.664 "adrfam": "IPv4", 00:21:23.664 "traddr": "10.0.0.2", 00:21:23.664 "trsvcid": "4420" 00:21:23.664 }, 00:21:23.664 "peer_address": { 00:21:23.664 "trtype": "TCP", 00:21:23.664 "adrfam": "IPv4", 00:21:23.664 "traddr": "10.0.0.1", 00:21:23.664 "trsvcid": "34584" 00:21:23.664 }, 00:21:23.664 "auth": { 00:21:23.664 "state": "completed", 00:21:23.664 "digest": "sha512", 00:21:23.664 "dhgroup": "ffdhe6144" 00:21:23.664 } 00:21:23.664 } 00:21:23.664 ]' 00:21:23.664 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.664 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.664 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:23.924 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.865 11:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.865 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.125 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.125 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.125 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.125 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.386 00:21:25.386 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.386 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.386 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.651 { 00:21:25.651 "cntlid": 131, 00:21:25.651 "qid": 0, 00:21:25.651 "state": "enabled", 00:21:25.651 "thread": "nvmf_tgt_poll_group_000", 00:21:25.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:25.651 "listen_address": { 00:21:25.651 "trtype": "TCP", 00:21:25.651 "adrfam": "IPv4", 00:21:25.651 "traddr": "10.0.0.2", 00:21:25.651 
"trsvcid": "4420" 00:21:25.651 }, 00:21:25.651 "peer_address": { 00:21:25.651 "trtype": "TCP", 00:21:25.651 "adrfam": "IPv4", 00:21:25.651 "traddr": "10.0.0.1", 00:21:25.651 "trsvcid": "34618" 00:21:25.651 }, 00:21:25.651 "auth": { 00:21:25.651 "state": "completed", 00:21:25.651 "digest": "sha512", 00:21:25.651 "dhgroup": "ffdhe6144" 00:21:25.651 } 00:21:25.651 } 00:21:25.651 ]' 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.651 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.911 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:25.911 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.850 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.851 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.851 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.111 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.371 { 00:21:27.371 "cntlid": 133, 00:21:27.371 "qid": 0, 00:21:27.371 "state": "enabled", 00:21:27.371 "thread": "nvmf_tgt_poll_group_000", 00:21:27.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:27.371 "listen_address": { 00:21:27.371 "trtype": "TCP", 00:21:27.371 "adrfam": "IPv4", 00:21:27.371 "traddr": "10.0.0.2", 00:21:27.371 "trsvcid": "4420" 00:21:27.371 }, 00:21:27.371 "peer_address": { 00:21:27.371 "trtype": "TCP", 00:21:27.371 "adrfam": "IPv4", 00:21:27.371 "traddr": "10.0.0.1", 00:21:27.371 "trsvcid": "34642" 00:21:27.371 }, 00:21:27.371 "auth": { 00:21:27.371 "state": "completed", 00:21:27.371 "digest": "sha512", 00:21:27.371 "dhgroup": "ffdhe6144" 00:21:27.371 } 00:21:27.371 } 00:21:27.371 ]' 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.371 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.372 11:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.372 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.372 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.639 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.639 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.639 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.639 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:27.639 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.582 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.152 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.152 { 00:21:29.152 "cntlid": 135, 00:21:29.152 "qid": 0, 00:21:29.152 "state": "enabled", 00:21:29.152 "thread": "nvmf_tgt_poll_group_000", 00:21:29.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:29.152 "listen_address": { 00:21:29.152 "trtype": "TCP", 00:21:29.152 "adrfam": "IPv4", 00:21:29.152 "traddr": "10.0.0.2", 00:21:29.152 "trsvcid": "4420" 00:21:29.152 }, 00:21:29.152 "peer_address": { 00:21:29.152 "trtype": "TCP", 00:21:29.152 "adrfam": "IPv4", 00:21:29.152 "traddr": "10.0.0.1", 00:21:29.152 "trsvcid": "34656" 00:21:29.152 }, 00:21:29.152 "auth": { 00:21:29.152 "state": "completed", 00:21:29.152 "digest": "sha512", 00:21:29.152 "dhgroup": "ffdhe6144" 00:21:29.152 } 00:21:29.152 } 00:21:29.152 ]' 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.152 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:29.413 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.352 11:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.352 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.613 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.614 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.185 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.185 { 00:21:31.185 "cntlid": 137, 00:21:31.185 "qid": 0, 00:21:31.185 "state": "enabled", 00:21:31.185 "thread": "nvmf_tgt_poll_group_000", 00:21:31.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.185 "listen_address": { 00:21:31.185 "trtype": "TCP", 00:21:31.185 "adrfam": "IPv4", 00:21:31.185 "traddr": "10.0.0.2", 00:21:31.185 
"trsvcid": "4420" 00:21:31.185 }, 00:21:31.185 "peer_address": { 00:21:31.185 "trtype": "TCP", 00:21:31.185 "adrfam": "IPv4", 00:21:31.185 "traddr": "10.0.0.1", 00:21:31.185 "trsvcid": "34678" 00:21:31.185 }, 00:21:31.185 "auth": { 00:21:31.185 "state": "completed", 00:21:31.185 "digest": "sha512", 00:21:31.185 "dhgroup": "ffdhe8192" 00:21:31.185 } 00:21:31.185 } 00:21:31.185 ]' 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.185 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:31.446 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.386 11:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.386 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.387 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.957 00:21:32.957 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.957 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.957 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.218 { 00:21:33.218 "cntlid": 139, 00:21:33.218 "qid": 0, 00:21:33.218 "state": "enabled", 00:21:33.218 "thread": "nvmf_tgt_poll_group_000", 00:21:33.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.218 "listen_address": { 00:21:33.218 "trtype": "TCP", 00:21:33.218 "adrfam": "IPv4", 00:21:33.218 "traddr": "10.0.0.2", 00:21:33.218 "trsvcid": "4420" 00:21:33.218 }, 00:21:33.218 "peer_address": { 00:21:33.218 "trtype": "TCP", 00:21:33.218 "adrfam": "IPv4", 00:21:33.218 "traddr": "10.0.0.1", 00:21:33.218 "trsvcid": "43698" 00:21:33.218 }, 00:21:33.218 "auth": { 00:21:33.218 "state": "completed", 00:21:33.218 "digest": "sha512", 00:21:33.218 "dhgroup": "ffdhe8192" 00:21:33.218 } 00:21:33.218 } 00:21:33.218 ]' 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.218 11:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.218 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.479 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.479 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.479 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.479 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:33.479 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: --dhchap-ctrl-secret DHHC-1:02:YjdkYTc2ZTFjMWI4MGQ2ODc3NGMyNDg4NGIxMDM5NGIwOGExZjM4YjliNWEwZWMwNjjntw==: 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.420 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.991 00:21:34.991 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.991 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.991 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.251 11:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.251 { 00:21:35.251 "cntlid": 141, 00:21:35.251 "qid": 0, 00:21:35.251 "state": "enabled", 00:21:35.251 "thread": "nvmf_tgt_poll_group_000", 00:21:35.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.251 "listen_address": { 00:21:35.251 "trtype": "TCP", 00:21:35.251 "adrfam": "IPv4", 00:21:35.251 "traddr": "10.0.0.2", 00:21:35.251 "trsvcid": "4420" 00:21:35.251 }, 00:21:35.251 "peer_address": { 00:21:35.251 "trtype": "TCP", 00:21:35.251 "adrfam": "IPv4", 00:21:35.251 "traddr": "10.0.0.1", 00:21:35.251 "trsvcid": "43732" 00:21:35.251 }, 00:21:35.251 "auth": { 00:21:35.251 "state": "completed", 00:21:35.251 "digest": "sha512", 00:21:35.251 "dhgroup": "ffdhe8192" 00:21:35.251 } 00:21:35.251 } 00:21:35.251 ]' 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.251 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.511 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.511 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.511 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.511 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:35.511 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:01:ZjlmMzYyNTIzOTAwYzE5ZDU4YjlhOThhZDM5NmQwMjmQFsuP: 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.451 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.019 00:21:37.019 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.019 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.019 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.278 { 00:21:37.278 "cntlid": 143, 00:21:37.278 "qid": 0, 00:21:37.278 "state": "enabled", 00:21:37.278 "thread": "nvmf_tgt_poll_group_000", 00:21:37.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.278 "listen_address": { 00:21:37.278 "trtype": "TCP", 00:21:37.278 "adrfam": 
"IPv4", 00:21:37.278 "traddr": "10.0.0.2", 00:21:37.278 "trsvcid": "4420" 00:21:37.278 }, 00:21:37.278 "peer_address": { 00:21:37.278 "trtype": "TCP", 00:21:37.278 "adrfam": "IPv4", 00:21:37.278 "traddr": "10.0.0.1", 00:21:37.278 "trsvcid": "43752" 00:21:37.278 }, 00:21:37.278 "auth": { 00:21:37.278 "state": "completed", 00:21:37.278 "digest": "sha512", 00:21:37.278 "dhgroup": "ffdhe8192" 00:21:37.278 } 00:21:37.278 } 00:21:37.278 ]' 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.278 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.538 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:37.538 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=: 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.478 11:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.478 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.047 00:21:39.047 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.047 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.047 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.307 { 00:21:39.307 "cntlid": 145, 00:21:39.307 "qid": 0, 00:21:39.307 "state": "enabled", 00:21:39.307 "thread": "nvmf_tgt_poll_group_000", 00:21:39.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:39.307 "listen_address": { 00:21:39.307 "trtype": "TCP", 00:21:39.307 "adrfam": "IPv4", 00:21:39.307 "traddr": "10.0.0.2", 00:21:39.307 "trsvcid": "4420" 00:21:39.307 }, 00:21:39.307 "peer_address": { 00:21:39.307 "trtype": "TCP", 00:21:39.307 "adrfam": "IPv4", 00:21:39.307 "traddr": "10.0.0.1", 00:21:39.307 "trsvcid": "43764" 00:21:39.307 }, 00:21:39.307 "auth": { 00:21:39.307 "state": 
"completed", 00:21:39.307 "digest": "sha512", 00:21:39.307 "dhgroup": "ffdhe8192" 00:21:39.307 } 00:21:39.307 } 00:21:39.307 ]' 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.307 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.567 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:39.567 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmRlMWU5NGY1MTIzNmZjNDY3MTA5MzIyMTA3NmI0Y2M4MTNkZTEzYzE3YjNkMWE1uCJ04g==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjNmViMjliMzg3ZTRmYmQxZmI5NzUyMjk4YzRhZjVmZDdjNmIxODhiZjA4NTVkZDAzZDExYjEwMDgzMTA3YVFW35I=: 00:21:40.209 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.209 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.209 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.209 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:40.512 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:40.773 request: 00:21:40.773 { 00:21:40.773 "name": "nvme0", 00:21:40.773 "trtype": "tcp", 00:21:40.773 "traddr": "10.0.0.2", 00:21:40.773 "adrfam": "ipv4", 00:21:40.773 "trsvcid": "4420", 00:21:40.773 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.773 "prchk_reftag": false, 00:21:40.773 "prchk_guard": false, 00:21:40.773 "hdgst": false, 00:21:40.773 "ddgst": false, 00:21:40.773 "dhchap_key": "key2", 00:21:40.773 "allow_unrecognized_csi": false, 00:21:40.773 "method": "bdev_nvme_attach_controller", 00:21:40.773 "req_id": 1 00:21:40.773 } 00:21:40.773 Got JSON-RPC error response 00:21:40.773 response: 00:21:40.773 { 00:21:40.773 "code": -5, 00:21:40.773 "message": 
"Input/output error" 00:21:40.773 } 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.773 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.773 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:41.343 request: 00:21:41.343 { 00:21:41.343 "name": "nvme0", 00:21:41.343 "trtype": "tcp", 00:21:41.343 "traddr": "10.0.0.2", 00:21:41.343 "adrfam": "ipv4", 00:21:41.343 "trsvcid": "4420", 00:21:41.343 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:41.343 "prchk_reftag": false, 00:21:41.343 "prchk_guard": false, 00:21:41.343 "hdgst": 
false, 00:21:41.343 "ddgst": false, 00:21:41.343 "dhchap_key": "key1", 00:21:41.343 "dhchap_ctrlr_key": "ckey2", 00:21:41.343 "allow_unrecognized_csi": false, 00:21:41.343 "method": "bdev_nvme_attach_controller", 00:21:41.343 "req_id": 1 00:21:41.343 } 00:21:41.343 Got JSON-RPC error response 00:21:41.343 response: 00:21:41.343 { 00:21:41.343 "code": -5, 00:21:41.343 "message": "Input/output error" 00:21:41.343 } 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.343 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.913 request: 00:21:41.913 { 00:21:41.913 "name": "nvme0", 00:21:41.913 "trtype": 
"tcp", 00:21:41.913 "traddr": "10.0.0.2", 00:21:41.913 "adrfam": "ipv4", 00:21:41.913 "trsvcid": "4420", 00:21:41.913 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:41.913 "prchk_reftag": false, 00:21:41.913 "prchk_guard": false, 00:21:41.913 "hdgst": false, 00:21:41.913 "ddgst": false, 00:21:41.913 "dhchap_key": "key1", 00:21:41.913 "dhchap_ctrlr_key": "ckey1", 00:21:41.913 "allow_unrecognized_csi": false, 00:21:41.913 "method": "bdev_nvme_attach_controller", 00:21:41.913 "req_id": 1 00:21:41.913 } 00:21:41.913 Got JSON-RPC error response 00:21:41.913 response: 00:21:41.913 { 00:21:41.913 "code": -5, 00:21:41.913 "message": "Input/output error" 00:21:41.913 } 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2497222 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2497222 ']' 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2497222 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497222 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497222' 00:21:41.913 killing process with pid 2497222 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2497222 00:21:41.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2497222 00:21:42.854 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:42.854 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:42.855 11:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2524990 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2524990 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2524990 ']' 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.855 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.855 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # 
waitforlisten 2524990 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2524990 ']' 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.799 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.799 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:43.799 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:43.799 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.799 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.060 null0 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1ob 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.5N7 ]] 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5N7 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YlX 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.060 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.z4m ]] 00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4m 00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.630
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.cRt ]]
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cRt
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.061 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.333 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.333 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:44.333 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VHi
00:21:44.333 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.333 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:44.334 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:45.278 nvme0n1
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:45.278 {
00:21:45.278 "cntlid": 1,
00:21:45.278 "qid": 0,
00:21:45.278 "state": "enabled",
00:21:45.278 "thread": "nvmf_tgt_poll_group_000",
00:21:45.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:45.278 "listen_address": {
00:21:45.278 "trtype": "TCP",
00:21:45.278 "adrfam": "IPv4",
00:21:45.278 "traddr": "10.0.0.2",
00:21:45.278 "trsvcid": "4420"
00:21:45.278 },
00:21:45.278 "peer_address": {
00:21:45.278 "trtype": "TCP",
00:21:45.278 "adrfam": "IPv4",
00:21:45.278 "traddr": "10.0.0.1",
00:21:45.278 "trsvcid": "39694"
00:21:45.278 },
00:21:45.278 "auth": {
00:21:45.278 "state": "completed",
00:21:45.278 "digest": "sha512",
00:21:45.278 "dhgroup": "ffdhe8192"
00:21:45.278 }
00:21:45.278 }
00:21:45.278 ]'
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:45.278 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:45.539 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=:
00:21:45.539 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=:
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:46.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:46.482 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:46.743 request:
00:21:46.743 {
00:21:46.743 "name": "nvme0",
00:21:46.743 "trtype": "tcp",
00:21:46.743 "traddr": "10.0.0.2",
00:21:46.743 "adrfam": "ipv4",
00:21:46.743 "trsvcid": "4420",
00:21:46.743 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:46.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:46.743 "prchk_reftag": false,
00:21:46.743 "prchk_guard": false,
00:21:46.743 "hdgst": false,
00:21:46.743 "ddgst": false,
00:21:46.743 "dhchap_key": "key3",
00:21:46.743 "allow_unrecognized_csi": false,
00:21:46.743 "method": "bdev_nvme_attach_controller",
00:21:46.743 "req_id": 1
00:21:46.743 }
00:21:46.743 Got JSON-RPC error response
00:21:46.743 response:
00:21:46.743 {
00:21:46.743 "code": -5,
00:21:46.743 "message": "Input/output error"
00:21:46.743 }
00:21:46.743 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:46.744 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:47.004 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:47.004 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:47.004 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:47.005 request:
00:21:47.005 {
00:21:47.005 "name": "nvme0",
00:21:47.005 "trtype": "tcp",
00:21:47.005 "traddr": "10.0.0.2",
00:21:47.005 "adrfam": "ipv4",
00:21:47.005 "trsvcid": "4420",
00:21:47.005 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:47.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:47.005 "prchk_reftag": false,
00:21:47.005 "prchk_guard": false,
00:21:47.005 "hdgst": false,
00:21:47.005 "ddgst": false,
00:21:47.005 "dhchap_key": "key3",
00:21:47.005 "allow_unrecognized_csi": false,
00:21:47.005 "method": "bdev_nvme_attach_controller",
00:21:47.005 "req_id": 1
00:21:47.005 }
00:21:47.005 Got JSON-RPC error response
00:21:47.005 response:
00:21:47.005 {
00:21:47.005 "code": -5,
00:21:47.005 "message": "Input/output error"
00:21:47.005 }
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:47.005 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:47.266 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:47.266 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.266 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.266 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.266 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:47.267 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:47.527 request:
00:21:47.527 {
00:21:47.527 "name": "nvme0",
00:21:47.527 "trtype": "tcp",
00:21:47.527 "traddr": "10.0.0.2",
00:21:47.527 "adrfam": "ipv4",
00:21:47.527 "trsvcid": "4420",
00:21:47.527 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:47.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:47.527 "prchk_reftag": false,
00:21:47.527 "prchk_guard": false,
00:21:47.527 "hdgst": false,
00:21:47.527 "ddgst": false,
00:21:47.527 "dhchap_key": "key0",
00:21:47.527 "dhchap_ctrlr_key": "key1",
00:21:47.527 "allow_unrecognized_csi": false,
00:21:47.527 "method": "bdev_nvme_attach_controller",
00:21:47.527 "req_id": 1
00:21:47.527 }
00:21:47.527 Got JSON-RPC error response
00:21:47.527 response:
00:21:47.527 {
00:21:47.527 "code": -5,
00:21:47.527 "message": "Input/output error"
00:21:47.527 }
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:47.527 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:47.786 nvme0n1
00:21:47.786 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:21:47.786 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:21:47.786 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:48.046 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:48.046 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:48.046 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:48.307 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:49.252 nvme0n1
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:21:49.252 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:49.514 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:49.514 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=:
00:21:49.514 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: --dhchap-ctrl-secret DHHC-1:03:NDU0Y2RiMzM4ZTk4ZDc0NjkxZWNjOGNlYTMxMWRjMGVjMGRmYWRiNmFiYjA0MGJhZDg2ZjRiNzM5N2UzODk3MeUA8XM=:
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:50.085 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:50.347 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:21:50.348 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:50.348 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:50.348 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:50.348 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:50.920 request:
00:21:50.920 {
00:21:50.920 "name": "nvme0",
00:21:50.920 "trtype": "tcp",
00:21:50.920 "traddr": "10.0.0.2",
00:21:50.920 "adrfam": "ipv4",
00:21:50.920 "trsvcid": "4420",
00:21:50.920 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:50.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:50.920 "prchk_reftag": false,
00:21:50.920 "prchk_guard": false,
00:21:50.920 "hdgst": false,
00:21:50.920 "ddgst": false,
00:21:50.920 "dhchap_key": "key1",
00:21:50.920 "allow_unrecognized_csi": false,
00:21:50.920 "method": "bdev_nvme_attach_controller",
00:21:50.920 "req_id": 1
00:21:50.920 }
00:21:50.920 Got JSON-RPC error response
00:21:50.920 response:
00:21:50.920 {
00:21:50.920 "code": -5,
00:21:50.920 "message": "Input/output error"
00:21:50.920 }
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:50.920 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:21:51.863 nvme0n1
00:21:51.863 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:21:51.863 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:21:51.863 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:51.863 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:51.863 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:51.863 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:21:52.124 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:21:52.384 nvme0n1
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:52.384 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: '' 2s
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx:
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx: ]]
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODYzM2QzZTg0MGU1NjZjZTQ5MTE5ZTY0YTZlMDM1ODVy/zfx:
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:21:52.645 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:21:54.552 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:21:54.552 11:33:53
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: 2s 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: ]] 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2RjZWM0MjY5OGI0MGQ5NmZhNTczZDhmMzE2MzQ0MmRjNGRmOTQ1MmNiOWYyYzg4Mv7+aQ==: 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:54.813 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:56.720 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.720 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.720 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:57.659 nvme0n1 00:21:57.659 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.659 11:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.659 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.659 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.659 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.659 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.231 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:58.493 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:58.754 11:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.754 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:59.328 request: 00:21:59.328 { 00:21:59.328 "name": "nvme0", 00:21:59.328 "dhchap_key": "key1", 00:21:59.328 "dhchap_ctrlr_key": "key3", 00:21:59.328 "method": "bdev_nvme_set_keys", 00:21:59.328 "req_id": 1 00:21:59.328 } 00:21:59.328 Got JSON-RPC error response 00:21:59.328 response: 00:21:59.328 { 00:21:59.328 "code": -13, 00:21:59.328 "message": "Permission denied" 00:21:59.328 } 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:59.328 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:00.271 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:00.271 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:00.271 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.532 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:01.474 nvme0n1 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:01.474 
11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:01.474 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:02.046 request: 00:22:02.046 { 00:22:02.046 "name": "nvme0", 00:22:02.046 "dhchap_key": "key2", 00:22:02.046 "dhchap_ctrlr_key": "key0", 00:22:02.046 "method": "bdev_nvme_set_keys", 00:22:02.046 "req_id": 1 00:22:02.046 } 00:22:02.046 Got JSON-RPC error response 00:22:02.046 response: 00:22:02.046 { 00:22:02.046 "code": -13, 00:22:02.046 "message": "Permission denied" 00:22:02.046 } 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:02.046 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:02.046 11:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2497573 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2497573 ']' 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2497573 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497573 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497573' 00:22:03.430 killing process 
with pid 2497573 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2497573 00:22:03.430 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2497573 00:22:04.373 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:04.373 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.373 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.633 rmmod nvme_tcp 00:22:04.633 rmmod nvme_fabrics 00:22:04.633 rmmod nvme_keyring 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2524990 ']' 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2524990 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2524990 ']' 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2524990 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 
00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524990 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524990' 00:22:04.633 killing process with pid 2524990 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2524990 00:22:04.633 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2524990 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.573 11:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.573 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1ob /tmp/spdk.key-sha256.YlX /tmp/spdk.key-sha384.630 /tmp/spdk.key-sha512.VHi /tmp/spdk.key-sha512.5N7 /tmp/spdk.key-sha384.z4m /tmp/spdk.key-sha256.cRt '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:07.482 00:22:07.482 real 2m47.346s 00:22:07.482 user 6m12.019s 00:22:07.482 sys 0m24.308s 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.482 ************************************ 00:22:07.482 END TEST nvmf_auth_target 00:22:07.482 ************************************ 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:22:07.482 ************************************ 00:22:07.482 START TEST nvmf_bdevio_no_huge 00:22:07.482 ************************************ 00:22:07.482 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:07.744 * Looking for test storage... 00:22:07.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.744 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.744 11:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.744 --rc genhtml_branch_coverage=1 00:22:07.744 --rc genhtml_function_coverage=1 00:22:07.744 --rc genhtml_legend=1 00:22:07.744 --rc geninfo_all_blocks=1 00:22:07.744 --rc geninfo_unexecuted_blocks=1 00:22:07.744 00:22:07.744 ' 00:22:07.744 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.744 --rc genhtml_branch_coverage=1 00:22:07.744 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc 
genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.745 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.892 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:22:15.893 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:15.893 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:15.893 Found net devices under 0000:31:00.0: cvl_0_0 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.893 
11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:15.893 Found net devices under 0000:31:00.1: cvl_0_1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:15.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:22:15.893 00:22:15.893 --- 10.0.0.2 ping statistics --- 00:22:15.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.893 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:15.893 00:22:15.893 --- 10.0.0.1 ping statistics --- 00:22:15.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.893 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2533730 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2533730 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2533730 ']' 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.893 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.893 [2024-12-07 11:34:14.483508] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:15.893 [2024-12-07 11:34:14.483645] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:15.893 [2024-12-07 11:34:14.670398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.893 [2024-12-07 11:34:14.789151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.893 [2024-12-07 11:34:14.789203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.893 [2024-12-07 11:34:14.789216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.893 [2024-12-07 11:34:14.789229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.893 [2024-12-07 11:34:14.789238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.893 [2024-12-07 11:34:14.791451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.893 [2024-12-07 11:34:14.791584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:15.893 [2024-12-07 11:34:14.791696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.893 [2024-12-07 11:34:14.791713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.154 [2024-12-07 11:34:15.325089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:16.154 11:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.154 Malloc0 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.154 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.155 [2024-12-07 11:34:15.419487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.155 11:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.155 { 00:22:16.155 "params": { 00:22:16.155 "name": "Nvme$subsystem", 00:22:16.155 "trtype": "$TEST_TRANSPORT", 00:22:16.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.155 "adrfam": "ipv4", 00:22:16.155 "trsvcid": "$NVMF_PORT", 00:22:16.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.155 "hdgst": ${hdgst:-false}, 00:22:16.155 "ddgst": ${ddgst:-false} 00:22:16.155 }, 00:22:16.155 "method": "bdev_nvme_attach_controller" 00:22:16.155 } 00:22:16.155 EOF 00:22:16.155 )") 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:16.155 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:16.155 "params": { 00:22:16.155 "name": "Nvme1", 00:22:16.155 "trtype": "tcp", 00:22:16.155 "traddr": "10.0.0.2", 00:22:16.155 "adrfam": "ipv4", 00:22:16.155 "trsvcid": "4420", 00:22:16.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.155 "hdgst": false, 00:22:16.155 "ddgst": false 00:22:16.155 }, 00:22:16.155 "method": "bdev_nvme_attach_controller" 00:22:16.155 }' 00:22:16.414 [2024-12-07 11:34:15.520341] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:16.414 [2024-12-07 11:34:15.520463] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2533867 ] 00:22:16.414 [2024-12-07 11:34:15.681627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:16.675 [2024-12-07 11:34:15.792971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.675 [2024-12-07 11:34:15.793058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.675 [2024-12-07 11:34:15.793071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.935 I/O targets: 00:22:16.935 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:16.935 00:22:16.935 00:22:16.935 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.935 http://cunit.sourceforge.net/ 00:22:16.935 00:22:16.935 00:22:16.935 Suite: bdevio tests on: Nvme1n1 00:22:16.935 Test: blockdev write read block ...passed 00:22:17.195 Test: blockdev write zeroes read block ...passed 00:22:17.195 Test: blockdev write zeroes read no split ...passed 00:22:17.195 Test: blockdev write zeroes 
read split ...passed 00:22:17.195 Test: blockdev write zeroes read split partial ...passed 00:22:17.195 Test: blockdev reset ...[2024-12-07 11:34:16.379122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:17.195 [2024-12-07 11:34:16.379235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039bf00 (9): Bad file descriptor 00:22:17.195 [2024-12-07 11:34:16.401143] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:17.195 passed 00:22:17.195 Test: blockdev write read 8 blocks ...passed 00:22:17.195 Test: blockdev write read size > 128k ...passed 00:22:17.195 Test: blockdev write read invalid size ...passed 00:22:17.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:17.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:17.195 Test: blockdev write read max offset ...passed 00:22:17.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:17.195 Test: blockdev writev readv 8 blocks ...passed 00:22:17.195 Test: blockdev writev readv 30 x 1block ...passed 00:22:17.455 Test: blockdev writev readv block ...passed 00:22:17.455 Test: blockdev writev readv size > 128k ...passed 00:22:17.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:17.455 Test: blockdev comparev and writev ...[2024-12-07 11:34:16.588041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.588073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.588094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 
[2024-12-07 11:34:16.588105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.588644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.588657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.588670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.588678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.589228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.589254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.589262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.589773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.589787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.589800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:17.455 [2024-12-07 11:34:16.589809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:17.455 passed 00:22:17.455 Test: blockdev nvme passthru rw ...passed 00:22:17.455 Test: blockdev nvme passthru vendor specific ...[2024-12-07 11:34:16.674926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:17.455 [2024-12-07 11:34:16.674948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.675366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:17.455 [2024-12-07 11:34:16.675378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.675725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:17.455 [2024-12-07 11:34:16.675735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:17.455 [2024-12-07 11:34:16.676095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:17.455 [2024-12-07 11:34:16.676106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:17.455 passed 00:22:17.455 Test: blockdev nvme admin passthru ...passed 00:22:17.455 Test: blockdev copy ...passed 00:22:17.455 00:22:17.455 Run Summary: Type Total Ran Passed Failed Inactive 00:22:17.455 suites 1 1 n/a 0 0 00:22:17.455 tests 23 23 23 0 0 00:22:17.455 asserts 152 152 152 0 n/a 00:22:17.455 00:22:17.455 Elapsed time = 1.111 
seconds 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.028 rmmod nvme_tcp 00:22:18.028 rmmod nvme_fabrics 00:22:18.028 rmmod nvme_keyring 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2533730 ']' 00:22:18.028 11:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2533730 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2533730 ']' 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2533730 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.028 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533730 00:22:18.287 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:18.287 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:18.287 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533730' 00:22:18.287 killing process with pid 2533730 00:22:18.287 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2533730 00:22:18.287 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2533730 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:18.546 11:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.546 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.111 00:22:21.111 real 0m13.143s 00:22:21.111 user 0m17.566s 00:22:21.111 sys 0m6.716s 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.111 ************************************ 00:22:21.111 END TEST nvmf_bdevio_no_huge 00:22:21.111 ************************************ 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.111 11:34:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:21.111 
************************************ 00:22:21.111 START TEST nvmf_tls 00:22:21.111 ************************************ 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.111 * Looking for test storage... 00:22:21.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.111 --rc genhtml_branch_coverage=1 00:22:21.111 --rc genhtml_function_coverage=1 00:22:21.111 --rc genhtml_legend=1 00:22:21.111 --rc geninfo_all_blocks=1 00:22:21.111 --rc geninfo_unexecuted_blocks=1 00:22:21.111 00:22:21.111 ' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.111 --rc genhtml_branch_coverage=1 00:22:21.111 --rc genhtml_function_coverage=1 00:22:21.111 --rc genhtml_legend=1 00:22:21.111 --rc geninfo_all_blocks=1 00:22:21.111 --rc geninfo_unexecuted_blocks=1 00:22:21.111 00:22:21.111 ' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.111 --rc genhtml_branch_coverage=1 00:22:21.111 --rc genhtml_function_coverage=1 00:22:21.111 --rc genhtml_legend=1 00:22:21.111 --rc geninfo_all_blocks=1 00:22:21.111 --rc geninfo_unexecuted_blocks=1 00:22:21.111 00:22:21.111 ' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.111 --rc genhtml_branch_coverage=1 00:22:21.111 --rc genhtml_function_coverage=1 00:22:21.111 --rc genhtml_legend=1 00:22:21.111 --rc geninfo_all_blocks=1 00:22:21.111 --rc geninfo_unexecuted_blocks=1 00:22:21.111 00:22:21.111 ' 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.111 
11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.111 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:21.112 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.261 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:29.261 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:29.261 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.261 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.261 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:29.262 Found net devices under 0000:31:00.0: cvl_0_0 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:29.262 Found net devices under 0000:31:00.1: cvl_0_1 00:22:29.262 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.262 
11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:29.262 00:22:29.262 --- 10.0.0.2 ping statistics --- 00:22:29.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.262 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:29.262 00:22:29.262 --- 10.0.0.1 ping statistics --- 00:22:29.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.262 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2538618 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2538618 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2538618 ']' 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.262 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.262 [2024-12-07 11:34:27.890655] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:29.262 [2024-12-07 11:34:27.890793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.262 [2024-12-07 11:34:28.061130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.262 [2024-12-07 11:34:28.184930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.262 [2024-12-07 11:34:28.184992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
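The `nvmf_tcp_init` block above (common.sh@250-287) builds a single-host TCP test topology: the target port `cvl_0_0` is moved into a private network namespace while the initiator port `cvl_0_1` stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 traverses a real NIC-to-NIC path. A condensed sketch of those commands, echoed rather than executed (the real sequence needs root and the E810 ports; the flush and loopback steps are omitted):

```shell
# Condensed restating of the netns setup logged above. Interface and
# namespace names are taken from this run's log; emitting the commands
# instead of running them keeps the sketch usable without privileges.
print_tcp_topology_cmds() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    echo "ip netns add $ns"
    echo "ip link set $tgt netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    echo "ip link set $ini up"
    echo "ip netns exec $ns ip link set $tgt up"
    echo "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
```

The pings at common.sh@290-291 then verify both directions of this path before the target is started inside the namespace with `ip netns exec cvl_0_0_ns_spdk`.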
00:22:29.262 [2024-12-07 11:34:28.185006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.262 [2024-12-07 11:34:28.185038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.262 [2024-12-07 11:34:28.185051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.262 [2024-12-07 11:34:28.186521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.523 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.523 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.523 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.524 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.524 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.524 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.524 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:29.524 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:29.784 true 00:22:29.784 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.784 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:29.785 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:29.785 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:29.785 
11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:30.045 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.045 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:30.307 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:30.307 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:30.307 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:30.307 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.307 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:30.569 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:30.569 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:30.569 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:30.569 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.830 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:30.830 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:30.830 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
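Every check in this stretch of tls.sh follows one pattern: set a sock option over RPC (`sock_impl_set_options`), read the implementation back (`sock_impl_get_options`), extract one field with `jq -r`, and compare against the expected value. Where `jq` is unavailable, a naive pure-bash extractor is enough for the flat JSON replies these RPCs return; the helper below is hypothetical, not part of the SPDK scripts:

```shell
# Minimal scalar-field extractor for flat, single-level JSON such as
# {"tls_version": 13, "enable_ktls": false}. No nesting, no escaping,
# and no handling of missing fields -- a stand-in for `jq -r .field`.
json_field() {
    local json=$1 field=$2
    local rest=${json#*\"$field\":}   # drop everything up to the value
    rest=${rest%%[,\}]*}              # cut at the next comma or brace
    echo "${rest//[[:space:]\"]/}"    # strip whitespace and quotes
}
```

With it, the `version=13` / `ktls=false` comparisons seen above become e.g. `[[ $(json_field "$opts" tls_version) == 13 ]]`.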
00:22:30.830 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.830 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:31.091 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:31.091 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:31.091 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:31.351 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.352 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:31.613 11:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.lmh7uKvQQb 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.TEqxtPofw4 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lmh7uKvQQb 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.TEqxtPofw4 00:22:31.613 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:31.874 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:32.135 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.lmh7uKvQQb 00:22:32.135 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lmh7uKvQQb 00:22:32.135 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.396 [2024-12-07 11:34:31.492296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.396 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.396 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.707 [2024-12-07 11:34:31.821108] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.707 [2024-12-07 11:34:31.821358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.707 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.707 malloc0 00:22:32.707 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.994 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lmh7uKvQQb 00:22:33.278 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.278 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.lmh7uKvQQb 00:22:45.498 Initializing NVMe Controllers 00:22:45.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:45.498 Initialization complete. Launching workers. 
00:22:45.498 ======================================================== 00:22:45.498 Latency(us) 00:22:45.498 Device Information : IOPS MiB/s Average min max 00:22:45.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15268.87 59.64 4191.63 1631.76 5075.12 00:22:45.498 ======================================================== 00:22:45.498 Total : 15268.87 59.64 4191.63 1631.76 5075.12 00:22:45.498 00:22:45.498 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lmh7uKvQQb 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lmh7uKvQQb 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541560 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541560 /var/tmp/bdevperf.sock 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541560 ']' 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
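The `NVMeTLSkey-1:01:...:` strings produced by `format_interchange_psk` earlier in this run (tls.sh@119-120) follow the NVMe/TCP TLS PSK interchange format: base64 of the configured key bytes with a little-endian CRC-32 of those bytes appended, wrapped as `NVMeTLSkey-1:<hash>:<base64>:`. A self-contained restating of the `format_key` helper whose trace (`local prefix key digest`, `python -`) appears in the log above; the argument-passing style here is an assumption, the encoding matches the keys this run generated:

```shell
# Sketch of nvmf/common.sh's format_key: prefix, raw key string, and a
# numeric hash identifier (01 = SHA-256 here) in, interchange PSK out.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
raw = key.encode()
# Interchange format appends CRC-32 of the key, little-endian, then base64s.
crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
print(f"{prefix}:{digest:02x}:{base64.b64encode(raw + crc).decode()}:", end="")
EOF
}
```

Running it on this log's two inputs reproduces `key` and `key_2` exactly as written to `/tmp/tmp.lmh7uKvQQb` and `/tmp/tmp.TEqxtPofw4` above.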
00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.499 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.499 [2024-12-07 11:34:42.805931] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:22:45.499 [2024-12-07 11:34:42.806050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541560 ] 00:22:45.499 [2024-12-07 11:34:42.913237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.499 [2024-12-07 11:34:42.987259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.499 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.499 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.499 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lmh7uKvQQb 00:22:45.499 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:45.499 [2024-12-07 11:34:43.880486] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.499 TLSTESTn1 00:22:45.499 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:45.499 Running I/O for 10 seconds... 00:22:47.140 4256.00 IOPS, 16.62 MiB/s [2024-12-07T10:34:47.435Z] 4203.50 IOPS, 16.42 MiB/s [2024-12-07T10:34:48.375Z] 4261.33 IOPS, 16.65 MiB/s [2024-12-07T10:34:49.317Z] 4608.50 IOPS, 18.00 MiB/s [2024-12-07T10:34:50.259Z] 4719.00 IOPS, 18.43 MiB/s [2024-12-07T10:34:51.209Z] 4614.17 IOPS, 18.02 MiB/s [2024-12-07T10:34:52.147Z] 4592.86 IOPS, 17.94 MiB/s [2024-12-07T10:34:53.530Z] 4688.75 IOPS, 18.32 MiB/s [2024-12-07T10:34:54.101Z] 4661.44 IOPS, 18.21 MiB/s [2024-12-07T10:34:54.366Z] 4628.40 IOPS, 18.08 MiB/s 00:22:55.012 Latency(us) 00:22:55.012 [2024-12-07T10:34:54.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:55.012 Verification LBA range: start 0x0 length 0x2000 00:22:55.012 TLSTESTn1 : 10.04 4623.57 18.06 0.00 0.00 27623.45 7645.87 36481.71 00:22:55.012 [2024-12-07T10:34:54.366Z] =================================================================================================================== 00:22:55.012 [2024-12-07T10:34:54.366Z] Total : 4623.57 18.06 0.00 0.00 27623.45 7645.87 36481.71 00:22:55.012 { 00:22:55.012 "results": [ 00:22:55.012 { 00:22:55.012 "job": "TLSTESTn1", 00:22:55.012 "core_mask": "0x4", 00:22:55.012 "workload": "verify", 00:22:55.012 "status": "finished", 00:22:55.012 "verify_range": { 00:22:55.012 "start": 0, 00:22:55.012 "length": 8192 00:22:55.012 }, 00:22:55.012 "queue_depth": 128, 00:22:55.012 "io_size": 4096, 00:22:55.012 "runtime": 10.037475, 00:22:55.012 "iops": 
4623.573159584457, 00:22:55.012 "mibps": 18.060832654626786, 00:22:55.012 "io_failed": 0, 00:22:55.012 "io_timeout": 0, 00:22:55.012 "avg_latency_us": 27623.453772615943, 00:22:55.012 "min_latency_us": 7645.866666666667, 00:22:55.012 "max_latency_us": 36481.706666666665 00:22:55.012 } 00:22:55.012 ], 00:22:55.012 "core_count": 1 00:22:55.012 } 00:22:55.012 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.012 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2541560 00:22:55.012 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541560 ']' 00:22:55.012 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541560 00:22:55.012 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541560 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541560' 00:22:55.013 killing process with pid 2541560 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541560 00:22:55.013 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.013 00:22:55.013 Latency(us) 00:22:55.013 [2024-12-07T10:34:54.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.013 [2024-12-07T10:34:54.367Z] 
=================================================================================================================== 00:22:55.013 [2024-12-07T10:34:54.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.013 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541560 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TEqxtPofw4 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TEqxtPofw4 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TEqxtPofw4 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TEqxtPofw4 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2543898 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2543898 /var/tmp/bdevperf.sock 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2543898 ']' 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.583 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.583 [2024-12-07 11:34:54.768767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:55.583 [2024-12-07 11:34:54.768880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543898 ] 00:22:55.583 [2024-12-07 11:34:54.876425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.843 [2024-12-07 11:34:54.950258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.413 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.413 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.413 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TEqxtPofw4 00:22:56.413 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.674 [2024-12-07 11:34:55.855803] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.674 [2024-12-07 11:34:55.862769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:56.674 [2024-12-07 11:34:55.862976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:22:56.674 [2024-12-07 11:34:55.863961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:22:56.674 
[2024-12-07 11:34:55.864962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:56.674 [2024-12-07 11:34:55.864979] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:56.674 [2024-12-07 11:34:55.864991] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:56.674 [2024-12-07 11:34:55.865001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:56.674 request: 00:22:56.674 { 00:22:56.674 "name": "TLSTEST", 00:22:56.674 "trtype": "tcp", 00:22:56.674 "traddr": "10.0.0.2", 00:22:56.674 "adrfam": "ipv4", 00:22:56.674 "trsvcid": "4420", 00:22:56.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.674 "prchk_reftag": false, 00:22:56.674 "prchk_guard": false, 00:22:56.674 "hdgst": false, 00:22:56.674 "ddgst": false, 00:22:56.674 "psk": "key0", 00:22:56.674 "allow_unrecognized_csi": false, 00:22:56.674 "method": "bdev_nvme_attach_controller", 00:22:56.674 "req_id": 1 00:22:56.674 } 00:22:56.674 Got JSON-RPC error response 00:22:56.674 response: 00:22:56.674 { 00:22:56.674 "code": -5, 00:22:56.674 "message": "Input/output error" 00:22:56.674 } 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2543898 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2543898 ']' 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2543898 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2543898 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2543898' 00:22:56.674 killing process with pid 2543898 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2543898 00:22:56.674 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.674 00:22:56.674 Latency(us) 00:22:56.674 [2024-12-07T10:34:56.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.674 [2024-12-07T10:34:56.028Z] =================================================================================================================== 00:22:56.674 [2024-12-07T10:34:56.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.674 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2543898 00:22:57.269 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.269 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:57.269 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.269 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.269 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lmh7uKvQQb 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lmh7uKvQQb 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lmh7uKvQQb 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lmh7uKvQQb 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2544250 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2544250 /var/tmp/bdevperf.sock 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544250 ']' 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.270 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.270 [2024-12-07 11:34:56.469114] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:57.270 [2024-12-07 11:34:56.469224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544250 ] 00:22:57.270 [2024-12-07 11:34:56.577282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.530 [2024-12-07 11:34:56.651375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.100 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.100 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.100 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lmh7uKvQQb 00:22:58.100 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:58.361 [2024-12-07 11:34:57.532551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.361 [2024-12-07 11:34:57.545651] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:58.361 [2024-12-07 11:34:57.545677] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:58.361 [2024-12-07 11:34:57.545707] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:58.361 [2024-12-07 11:34:57.546648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:22:58.361 [2024-12-07 11:34:57.547633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:22:58.361 [2024-12-07 11:34:57.548630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:58.361 [2024-12-07 11:34:57.548647] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:58.361 [2024-12-07 11:34:57.548657] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:58.361 [2024-12-07 11:34:57.548668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:58.361 request: 00:22:58.361 { 00:22:58.361 "name": "TLSTEST", 00:22:58.361 "trtype": "tcp", 00:22:58.361 "traddr": "10.0.0.2", 00:22:58.361 "adrfam": "ipv4", 00:22:58.361 "trsvcid": "4420", 00:22:58.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.361 "prchk_reftag": false, 00:22:58.361 "prchk_guard": false, 00:22:58.361 "hdgst": false, 00:22:58.361 "ddgst": false, 00:22:58.361 "psk": "key0", 00:22:58.361 "allow_unrecognized_csi": false, 00:22:58.361 "method": "bdev_nvme_attach_controller", 00:22:58.361 "req_id": 1 00:22:58.361 } 00:22:58.361 Got JSON-RPC error response 00:22:58.361 response: 00:22:58.361 { 00:22:58.361 "code": -5, 00:22:58.361 "message": "Input/output error" 00:22:58.361 } 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2544250 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544250 ']' 00:22:58.361 
11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544250 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544250 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544250' 00:22:58.361 killing process with pid 2544250 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544250 00:22:58.361 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.361 00:22:58.361 Latency(us) 00:22:58.361 [2024-12-07T10:34:57.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.361 [2024-12-07T10:34:57.715Z] =================================================================================================================== 00:22:58.361 [2024-12-07T10:34:57.715Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.361 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544250 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.929 
11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lmh7uKvQQb 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lmh7uKvQQb 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lmh7uKvQQb 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lmh7uKvQQb 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2544595 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- 
# trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2544595 /var/tmp/bdevperf.sock 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544595 ']' 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.929 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 [2024-12-07 11:34:58.147400] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:22:58.929 [2024-12-07 11:34:58.147511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544595 ] 00:22:58.929 [2024-12-07 11:34:58.255527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.189 [2024-12-07 11:34:58.329365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.757 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.757 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.757 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lmh7uKvQQb 00:22:59.757 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.018 [2024-12-07 11:34:59.226885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.018 [2024-12-07 11:34:59.233256] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.018 [2024-12-07 11:34:59.233281] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.018 [2024-12-07 11:34:59.233309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:00.018 [2024-12-07 11:34:59.233756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:23:00.018 [2024-12-07 11:34:59.234742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:23:00.018 [2024-12-07 11:34:59.235742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:00.018 [2024-12-07 11:34:59.235756] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.018 [2024-12-07 11:34:59.235768] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:00.018 [2024-12-07 11:34:59.235778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:00.018 request: 00:23:00.018 { 00:23:00.018 "name": "TLSTEST", 00:23:00.018 "trtype": "tcp", 00:23:00.018 "traddr": "10.0.0.2", 00:23:00.018 "adrfam": "ipv4", 00:23:00.018 "trsvcid": "4420", 00:23:00.018 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.018 "prchk_reftag": false, 00:23:00.018 "prchk_guard": false, 00:23:00.018 "hdgst": false, 00:23:00.018 "ddgst": false, 00:23:00.018 "psk": "key0", 00:23:00.018 "allow_unrecognized_csi": false, 00:23:00.018 "method": "bdev_nvme_attach_controller", 00:23:00.018 "req_id": 1 00:23:00.018 } 00:23:00.018 Got JSON-RPC error response 00:23:00.018 response: 00:23:00.018 { 00:23:00.018 "code": -5, 00:23:00.018 "message": "Input/output error" 00:23:00.018 } 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2544595 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544595 ']' 00:23:00.018 
11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544595 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544595 00:23:00.018 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:00.019 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:00.019 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544595' 00:23:00.019 killing process with pid 2544595 00:23:00.019 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544595 00:23:00.019 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.019 00:23:00.019 Latency(us) 00:23:00.019 [2024-12-07T10:34:59.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.019 [2024-12-07T10:34:59.373Z] =================================================================================================================== 00:23:00.019 [2024-12-07T10:34:59.373Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.019 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544595 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.589 
11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2544937 00:23:00.589 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.589 11:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2544937 /var/tmp/bdevperf.sock 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544937 ']' 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.590 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.590 [2024-12-07 11:34:59.841257] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:00.590 [2024-12-07 11:34:59.841370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544937 ] 00:23:00.850 [2024-12-07 11:34:59.948873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.850 [2024-12-07 11:35:00.024438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.421 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.421 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:01.421 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:01.421 [2024-12-07 11:35:00.737158] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:01.421 [2024-12-07 11:35:00.737190] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:01.421 request: 00:23:01.421 { 00:23:01.421 "name": "key0", 00:23:01.421 "path": "", 00:23:01.421 "method": "keyring_file_add_key", 00:23:01.421 "req_id": 1 00:23:01.421 } 00:23:01.421 Got JSON-RPC error response 00:23:01.421 response: 00:23:01.421 { 00:23:01.421 "code": -1, 00:23:01.421 "message": "Operation not permitted" 00:23:01.421 } 00:23:01.421 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.681 [2024-12-07 11:35:00.889632] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:01.681 [2024-12-07 11:35:00.889672] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:01.681 request: 00:23:01.681 { 00:23:01.681 "name": "TLSTEST", 00:23:01.681 "trtype": "tcp", 00:23:01.681 "traddr": "10.0.0.2", 00:23:01.681 "adrfam": "ipv4", 00:23:01.681 "trsvcid": "4420", 00:23:01.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.681 "prchk_reftag": false, 00:23:01.681 "prchk_guard": false, 00:23:01.681 "hdgst": false, 00:23:01.681 "ddgst": false, 00:23:01.681 "psk": "key0", 00:23:01.681 "allow_unrecognized_csi": false, 00:23:01.681 "method": "bdev_nvme_attach_controller", 00:23:01.681 "req_id": 1 00:23:01.681 } 00:23:01.681 Got JSON-RPC error response 00:23:01.681 response: 00:23:01.681 { 00:23:01.681 "code": -126, 00:23:01.681 "message": "Required key not available" 00:23:01.681 } 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2544937 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544937 ']' 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544937 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544937 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544937' 00:23:01.681 killing process with pid 2544937 
00:23:01.681 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544937 00:23:01.681 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.681 00:23:01.681 Latency(us) 00:23:01.681 [2024-12-07T10:35:01.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.681 [2024-12-07T10:35:01.035Z] =================================================================================================================== 00:23:01.681 [2024-12-07T10:35:01.035Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.682 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544937 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2538618 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2538618 ']' 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2538618 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538618 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538618' 00:23:02.253 killing process with pid 2538618 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2538618 00:23:02.253 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2538618 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:02.825 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.jpsCLJTpOO 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:03.086 11:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.jpsCLJTpOO 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2545306 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2545306 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2545306 ']' 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.086 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.086 [2024-12-07 11:35:02.303168] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:03.086 [2024-12-07 11:35:02.303300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.346 [2024-12-07 11:35:02.464686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.346 [2024-12-07 11:35:02.546137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.346 [2024-12-07 11:35:02.546177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.346 [2024-12-07 11:35:02.546186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.346 [2024-12-07 11:35:02.546195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.346 [2024-12-07 11:35:02.546204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:03.346 [2024-12-07 11:35:02.547185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jpsCLJTpOO 00:23:03.918 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.918 [2024-12-07 11:35:03.258739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.178 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.178 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.439 [2024-12-07 11:35:03.571508] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.439 [2024-12-07 11:35:03.571745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:04.439 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.439 malloc0 00:23:04.440 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:04.700 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jpsCLJTpOO 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jpsCLJTpOO 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2545724 00:23:04.961 11:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2545724 /var/tmp/bdevperf.sock 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2545724 ']' 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.961 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.961 [2024-12-07 11:35:04.299647] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:04.961 [2024-12-07 11:35:04.299756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545724 ] 00:23:05.222 [2024-12-07 11:35:04.409772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.222 [2024-12-07 11:35:04.483993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.793 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.793 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.793 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:06.053 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.312 [2024-12-07 11:35:05.414231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.312 TLSTESTn1 00:23:06.312 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.312 Running I/O for 10 seconds... 
00:23:08.268 5247.00 IOPS, 20.50 MiB/s [2024-12-07T10:35:09.009Z] 5314.00 IOPS, 20.76 MiB/s [2024-12-07T10:35:09.951Z] 5252.00 IOPS, 20.52 MiB/s [2024-12-07T10:35:10.897Z] 5256.00 IOPS, 20.53 MiB/s [2024-12-07T10:35:11.844Z] 5204.20 IOPS, 20.33 MiB/s [2024-12-07T10:35:12.786Z] 5123.00 IOPS, 20.01 MiB/s [2024-12-07T10:35:13.724Z] 4989.57 IOPS, 19.49 MiB/s [2024-12-07T10:35:14.664Z] 4928.00 IOPS, 19.25 MiB/s [2024-12-07T10:35:16.047Z] 4866.00 IOPS, 19.01 MiB/s [2024-12-07T10:35:16.047Z] 4835.40 IOPS, 18.89 MiB/s 00:23:16.693 Latency(us) 00:23:16.693 [2024-12-07T10:35:16.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.693 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:16.693 Verification LBA range: start 0x0 length 0x2000 00:23:16.693 TLSTESTn1 : 10.02 4837.21 18.90 0.00 0.00 26415.02 6253.23 43472.21 00:23:16.693 [2024-12-07T10:35:16.047Z] =================================================================================================================== 00:23:16.693 [2024-12-07T10:35:16.047Z] Total : 4837.21 18.90 0.00 0.00 26415.02 6253.23 43472.21 00:23:16.693 { 00:23:16.693 "results": [ 00:23:16.693 { 00:23:16.693 "job": "TLSTESTn1", 00:23:16.693 "core_mask": "0x4", 00:23:16.693 "workload": "verify", 00:23:16.693 "status": "finished", 00:23:16.693 "verify_range": { 00:23:16.693 "start": 0, 00:23:16.693 "length": 8192 00:23:16.693 }, 00:23:16.693 "queue_depth": 128, 00:23:16.693 "io_size": 4096, 00:23:16.693 "runtime": 10.022714, 00:23:16.693 "iops": 4837.21275494841, 00:23:16.693 "mibps": 18.895362324017228, 00:23:16.693 "io_failed": 0, 00:23:16.693 "io_timeout": 0, 00:23:16.693 "avg_latency_us": 26415.019402389888, 00:23:16.693 "min_latency_us": 6253.2266666666665, 00:23:16.693 "max_latency_us": 43472.21333333333 00:23:16.693 } 00:23:16.693 ], 00:23:16.693 "core_count": 1 00:23:16.693 } 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2545724 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2545724 ']' 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2545724 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545724 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545724' 00:23:16.693 killing process with pid 2545724 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2545724 00:23:16.693 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.693 00:23:16.693 Latency(us) 00:23:16.693 [2024-12-07T10:35:16.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.693 [2024-12-07T10:35:16.047Z] =================================================================================================================== 00:23:16.693 [2024-12-07T10:35:16.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.693 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2545724 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.jpsCLJTpOO 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jpsCLJTpOO 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jpsCLJTpOO 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jpsCLJTpOO 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jpsCLJTpOO 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2548011 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2548011 
/var/tmp/bdevperf.sock 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548011 ']' 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.954 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.215 [2024-12-07 11:35:16.311931] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:17.215 [2024-12-07 11:35:16.312048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548011 ] 00:23:17.215 [2024-12-07 11:35:16.420651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.215 [2024-12-07 11:35:16.493454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.788 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.788 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.788 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:18.048 [2024-12-07 11:35:17.234654] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jpsCLJTpOO': 0100666 00:23:18.048 [2024-12-07 11:35:17.234690] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:18.048 request: 00:23:18.048 { 00:23:18.048 "name": "key0", 00:23:18.048 "path": "/tmp/tmp.jpsCLJTpOO", 00:23:18.048 "method": "keyring_file_add_key", 00:23:18.048 "req_id": 1 00:23:18.048 } 00:23:18.048 Got JSON-RPC error response 00:23:18.048 response: 00:23:18.048 { 00:23:18.048 "code": -1, 00:23:18.048 "message": "Operation not permitted" 00:23:18.048 } 00:23:18.048 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.308 [2024-12-07 11:35:17.411179] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.308 [2024-12-07 11:35:17.411217] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:18.308 request: 00:23:18.308 { 00:23:18.308 "name": "TLSTEST", 00:23:18.308 "trtype": "tcp", 00:23:18.308 "traddr": "10.0.0.2", 00:23:18.308 "adrfam": "ipv4", 00:23:18.308 "trsvcid": "4420", 00:23:18.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.308 "prchk_reftag": false, 00:23:18.308 "prchk_guard": false, 00:23:18.308 "hdgst": false, 00:23:18.308 "ddgst": false, 00:23:18.308 "psk": "key0", 00:23:18.308 "allow_unrecognized_csi": false, 00:23:18.308 "method": "bdev_nvme_attach_controller", 00:23:18.308 "req_id": 1 00:23:18.308 } 00:23:18.308 Got JSON-RPC error response 00:23:18.308 response: 00:23:18.308 { 00:23:18.308 "code": -126, 00:23:18.308 "message": "Required key not available" 00:23:18.308 } 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2548011 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548011 ']' 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548011 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548011 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2548011' 00:23:18.308 killing process with pid 2548011 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548011 00:23:18.308 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.308 00:23:18.308 Latency(us) 00:23:18.308 [2024-12-07T10:35:17.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.308 [2024-12-07T10:35:17.662Z] =================================================================================================================== 00:23:18.308 [2024-12-07T10:35:17.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.308 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548011 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2545306 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2545306 ']' 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2545306 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545306 00:23:18.877 
11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.877 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.877 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545306' 00:23:18.877 killing process with pid 2545306 00:23:18.877 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2545306 00:23:18.877 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2545306 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2548641 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2548641 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548641 ']' 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:19.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.448 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.448 [2024-12-07 11:35:18.747363] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:19.448 [2024-12-07 11:35:18.747475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.709 [2024-12-07 11:35:18.897470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.709 [2024-12-07 11:35:18.977224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.709 [2024-12-07 11:35:18.977261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.709 [2024-12-07 11:35:18.977270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.709 [2024-12-07 11:35:18.977279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.709 [2024-12-07 11:35:18.977288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.710 [2024-12-07 11:35:18.978233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jpsCLJTpOO 00:23:20.281 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:20.543 [2024-12-07 11:35:19.733710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.543 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:20.803 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:20.804 [2024-12-07 11:35:20.058513] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.804 [2024-12-07 11:35:20.058762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.804 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:21.064 malloc0 00:23:21.064 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:21.064 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:21.325 [2024-12-07 11:35:20.563492] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jpsCLJTpOO': 0100666 00:23:21.325 [2024-12-07 11:35:20.563521] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:21.325 request: 00:23:21.325 { 00:23:21.325 "name": "key0", 00:23:21.325 "path": "/tmp/tmp.jpsCLJTpOO", 00:23:21.325 "method": "keyring_file_add_key", 00:23:21.325 "req_id": 1 
00:23:21.325 } 00:23:21.325 Got JSON-RPC error response 00:23:21.325 response: 00:23:21.325 { 00:23:21.325 "code": -1, 00:23:21.325 "message": "Operation not permitted" 00:23:21.325 } 00:23:21.325 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.586 [2024-12-07 11:35:20.731932] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:21.586 [2024-12-07 11:35:20.731969] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:21.586 request: 00:23:21.586 { 00:23:21.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.586 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.586 "psk": "key0", 00:23:21.586 "method": "nvmf_subsystem_add_host", 00:23:21.586 "req_id": 1 00:23:21.586 } 00:23:21.586 Got JSON-RPC error response 00:23:21.586 response: 00:23:21.586 { 00:23:21.586 "code": -32603, 00:23:21.586 "message": "Internal error" 00:23:21.586 } 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2548641 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548641 ']' 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548641 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.586 11:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548641 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548641' 00:23:21.586 killing process with pid 2548641 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548641 00:23:21.586 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548641 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.jpsCLJTpOO 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2549065 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2549065 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2549065 ']' 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.158 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.420 [2024-12-07 11:35:21.524699] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:22.420 [2024-12-07 11:35:21.524802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.420 [2024-12-07 11:35:21.664797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.420 [2024-12-07 11:35:21.738402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.420 [2024-12-07 11:35:21.738440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.420 [2024-12-07 11:35:21.738449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.420 [2024-12-07 11:35:21.738458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.420 [2024-12-07 11:35:21.738467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.420 [2024-12-07 11:35:21.739370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jpsCLJTpOO 00:23:23.002 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.319 [2024-12-07 11:35:22.458110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.319 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.319 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.616 [2024-12-07 11:35:22.790933] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.616 [2024-12-07 11:35:22.791199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:23.617 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:23.908 malloc0 00:23:23.908 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.908 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2549544 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2549544 /var/tmp/bdevperf.sock 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2549544 ']' 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:24.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.169 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.430 [2024-12-07 11:35:23.548699] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:24.430 [2024-12-07 11:35:23.548812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549544 ] 00:23:24.430 [2024-12-07 11:35:23.657971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.430 [2024-12-07 11:35:23.731995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.001 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.001 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.001 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:25.262 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.262 [2024-12-07 11:35:24.574033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.523 TLSTESTn1 00:23:25.523 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:25.784 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:25.784 "subsystems": [ 00:23:25.784 { 00:23:25.784 "subsystem": "keyring", 00:23:25.784 "config": [ 00:23:25.784 { 00:23:25.784 "method": "keyring_file_add_key", 00:23:25.784 "params": { 00:23:25.784 "name": "key0", 00:23:25.784 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:25.784 } 00:23:25.784 } 00:23:25.784 ] 00:23:25.784 }, 00:23:25.784 { 00:23:25.784 "subsystem": "iobuf", 00:23:25.784 "config": [ 00:23:25.784 { 00:23:25.784 "method": "iobuf_set_options", 00:23:25.784 "params": { 00:23:25.784 "small_pool_count": 8192, 00:23:25.784 "large_pool_count": 1024, 00:23:25.784 "small_bufsize": 8192, 00:23:25.784 "large_bufsize": 135168, 00:23:25.784 "enable_numa": false 00:23:25.784 } 00:23:25.784 } 00:23:25.784 ] 00:23:25.784 }, 00:23:25.784 { 00:23:25.784 "subsystem": "sock", 00:23:25.784 "config": [ 00:23:25.784 { 00:23:25.784 "method": "sock_set_default_impl", 00:23:25.784 "params": { 00:23:25.784 "impl_name": "posix" 00:23:25.784 } 00:23:25.784 }, 00:23:25.785 { 00:23:25.785 "method": "sock_impl_set_options", 00:23:25.785 "params": { 00:23:25.785 "impl_name": "ssl", 00:23:25.785 "recv_buf_size": 4096, 00:23:25.785 "send_buf_size": 4096, 00:23:25.785 "enable_recv_pipe": true, 00:23:25.785 "enable_quickack": false, 00:23:25.785 "enable_placement_id": 0, 00:23:25.785 "enable_zerocopy_send_server": true, 00:23:25.785 "enable_zerocopy_send_client": false, 00:23:25.785 "zerocopy_threshold": 0, 00:23:25.785 "tls_version": 0, 00:23:25.785 "enable_ktls": false 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "sock_impl_set_options", 00:23:25.785 "params": { 00:23:25.785 "impl_name": "posix", 00:23:25.785 "recv_buf_size": 2097152, 00:23:25.785 "send_buf_size": 2097152, 00:23:25.785 "enable_recv_pipe": true, 00:23:25.785 "enable_quickack": false, 00:23:25.785 "enable_placement_id": 0, 
00:23:25.785 "enable_zerocopy_send_server": true, 00:23:25.785 "enable_zerocopy_send_client": false, 00:23:25.785 "zerocopy_threshold": 0, 00:23:25.785 "tls_version": 0, 00:23:25.785 "enable_ktls": false 00:23:25.785 } 00:23:25.785 } 00:23:25.785 ] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "vmd", 00:23:25.785 "config": [] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "accel", 00:23:25.785 "config": [ 00:23:25.785 { 00:23:25.785 "method": "accel_set_options", 00:23:25.785 "params": { 00:23:25.785 "small_cache_size": 128, 00:23:25.785 "large_cache_size": 16, 00:23:25.785 "task_count": 2048, 00:23:25.785 "sequence_count": 2048, 00:23:25.785 "buf_count": 2048 00:23:25.785 } 00:23:25.785 } 00:23:25.785 ] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "bdev", 00:23:25.785 "config": [ 00:23:25.785 { 00:23:25.785 "method": "bdev_set_options", 00:23:25.785 "params": { 00:23:25.785 "bdev_io_pool_size": 65535, 00:23:25.785 "bdev_io_cache_size": 256, 00:23:25.785 "bdev_auto_examine": true, 00:23:25.785 "iobuf_small_cache_size": 128, 00:23:25.785 "iobuf_large_cache_size": 16 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_raid_set_options", 00:23:25.785 "params": { 00:23:25.785 "process_window_size_kb": 1024, 00:23:25.785 "process_max_bandwidth_mb_sec": 0 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_iscsi_set_options", 00:23:25.785 "params": { 00:23:25.785 "timeout_sec": 30 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_nvme_set_options", 00:23:25.785 "params": { 00:23:25.785 "action_on_timeout": "none", 00:23:25.785 "timeout_us": 0, 00:23:25.785 "timeout_admin_us": 0, 00:23:25.785 "keep_alive_timeout_ms": 10000, 00:23:25.785 "arbitration_burst": 0, 00:23:25.785 "low_priority_weight": 0, 00:23:25.785 "medium_priority_weight": 0, 00:23:25.785 "high_priority_weight": 0, 00:23:25.785 "nvme_adminq_poll_period_us": 10000, 00:23:25.785 "nvme_ioq_poll_period_us": 0, 
00:23:25.785 "io_queue_requests": 0, 00:23:25.785 "delay_cmd_submit": true, 00:23:25.785 "transport_retry_count": 4, 00:23:25.785 "bdev_retry_count": 3, 00:23:25.785 "transport_ack_timeout": 0, 00:23:25.785 "ctrlr_loss_timeout_sec": 0, 00:23:25.785 "reconnect_delay_sec": 0, 00:23:25.785 "fast_io_fail_timeout_sec": 0, 00:23:25.785 "disable_auto_failback": false, 00:23:25.785 "generate_uuids": false, 00:23:25.785 "transport_tos": 0, 00:23:25.785 "nvme_error_stat": false, 00:23:25.785 "rdma_srq_size": 0, 00:23:25.785 "io_path_stat": false, 00:23:25.785 "allow_accel_sequence": false, 00:23:25.785 "rdma_max_cq_size": 0, 00:23:25.785 "rdma_cm_event_timeout_ms": 0, 00:23:25.785 "dhchap_digests": [ 00:23:25.785 "sha256", 00:23:25.785 "sha384", 00:23:25.785 "sha512" 00:23:25.785 ], 00:23:25.785 "dhchap_dhgroups": [ 00:23:25.785 "null", 00:23:25.785 "ffdhe2048", 00:23:25.785 "ffdhe3072", 00:23:25.785 "ffdhe4096", 00:23:25.785 "ffdhe6144", 00:23:25.785 "ffdhe8192" 00:23:25.785 ] 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_nvme_set_hotplug", 00:23:25.785 "params": { 00:23:25.785 "period_us": 100000, 00:23:25.785 "enable": false 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_malloc_create", 00:23:25.785 "params": { 00:23:25.785 "name": "malloc0", 00:23:25.785 "num_blocks": 8192, 00:23:25.785 "block_size": 4096, 00:23:25.785 "physical_block_size": 4096, 00:23:25.785 "uuid": "da147450-529a-4181-8673-2255ed9d3c9f", 00:23:25.785 "optimal_io_boundary": 0, 00:23:25.785 "md_size": 0, 00:23:25.785 "dif_type": 0, 00:23:25.785 "dif_is_head_of_md": false, 00:23:25.785 "dif_pi_format": 0 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "bdev_wait_for_examine" 00:23:25.785 } 00:23:25.785 ] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "nbd", 00:23:25.785 "config": [] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "scheduler", 00:23:25.785 "config": [ 00:23:25.785 { 00:23:25.785 "method": 
"framework_set_scheduler", 00:23:25.785 "params": { 00:23:25.785 "name": "static" 00:23:25.785 } 00:23:25.785 } 00:23:25.785 ] 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "subsystem": "nvmf", 00:23:25.785 "config": [ 00:23:25.785 { 00:23:25.785 "method": "nvmf_set_config", 00:23:25.785 "params": { 00:23:25.785 "discovery_filter": "match_any", 00:23:25.785 "admin_cmd_passthru": { 00:23:25.785 "identify_ctrlr": false 00:23:25.785 }, 00:23:25.785 "dhchap_digests": [ 00:23:25.785 "sha256", 00:23:25.785 "sha384", 00:23:25.785 "sha512" 00:23:25.785 ], 00:23:25.785 "dhchap_dhgroups": [ 00:23:25.785 "null", 00:23:25.785 "ffdhe2048", 00:23:25.785 "ffdhe3072", 00:23:25.785 "ffdhe4096", 00:23:25.785 "ffdhe6144", 00:23:25.785 "ffdhe8192" 00:23:25.785 ] 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "nvmf_set_max_subsystems", 00:23:25.785 "params": { 00:23:25.785 "max_subsystems": 1024 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "nvmf_set_crdt", 00:23:25.785 "params": { 00:23:25.785 "crdt1": 0, 00:23:25.785 "crdt2": 0, 00:23:25.785 "crdt3": 0 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "nvmf_create_transport", 00:23:25.785 "params": { 00:23:25.785 "trtype": "TCP", 00:23:25.785 "max_queue_depth": 128, 00:23:25.785 "max_io_qpairs_per_ctrlr": 127, 00:23:25.785 "in_capsule_data_size": 4096, 00:23:25.785 "max_io_size": 131072, 00:23:25.785 "io_unit_size": 131072, 00:23:25.785 "max_aq_depth": 128, 00:23:25.785 "num_shared_buffers": 511, 00:23:25.785 "buf_cache_size": 4294967295, 00:23:25.785 "dif_insert_or_strip": false, 00:23:25.785 "zcopy": false, 00:23:25.785 "c2h_success": false, 00:23:25.785 "sock_priority": 0, 00:23:25.785 "abort_timeout_sec": 1, 00:23:25.785 "ack_timeout": 0, 00:23:25.785 "data_wr_pool_size": 0 00:23:25.785 } 00:23:25.785 }, 00:23:25.785 { 00:23:25.785 "method": "nvmf_create_subsystem", 00:23:25.785 "params": { 00:23:25.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.785 
"allow_any_host": false, 00:23:25.785 "serial_number": "SPDK00000000000001", 00:23:25.785 "model_number": "SPDK bdev Controller", 00:23:25.785 "max_namespaces": 10, 00:23:25.785 "min_cntlid": 1, 00:23:25.785 "max_cntlid": 65519, 00:23:25.785 "ana_reporting": false 00:23:25.785 } 00:23:25.786 }, 00:23:25.786 { 00:23:25.786 "method": "nvmf_subsystem_add_host", 00:23:25.786 "params": { 00:23:25.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.786 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.786 "psk": "key0" 00:23:25.786 } 00:23:25.786 }, 00:23:25.786 { 00:23:25.786 "method": "nvmf_subsystem_add_ns", 00:23:25.786 "params": { 00:23:25.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.786 "namespace": { 00:23:25.786 "nsid": 1, 00:23:25.786 "bdev_name": "malloc0", 00:23:25.786 "nguid": "DA147450529A418186732255ED9D3C9F", 00:23:25.786 "uuid": "da147450-529a-4181-8673-2255ed9d3c9f", 00:23:25.786 "no_auto_visible": false 00:23:25.786 } 00:23:25.786 } 00:23:25.786 }, 00:23:25.786 { 00:23:25.786 "method": "nvmf_subsystem_add_listener", 00:23:25.786 "params": { 00:23:25.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.786 "listen_address": { 00:23:25.786 "trtype": "TCP", 00:23:25.786 "adrfam": "IPv4", 00:23:25.786 "traddr": "10.0.0.2", 00:23:25.786 "trsvcid": "4420" 00:23:25.786 }, 00:23:25.786 "secure_channel": true 00:23:25.786 } 00:23:25.786 } 00:23:25.786 ] 00:23:25.786 } 00:23:25.786 ] 00:23:25.786 }' 00:23:25.786 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:26.046 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:26.046 "subsystems": [ 00:23:26.046 { 00:23:26.046 "subsystem": "keyring", 00:23:26.046 "config": [ 00:23:26.046 { 00:23:26.046 "method": "keyring_file_add_key", 00:23:26.046 "params": { 00:23:26.046 "name": "key0", 00:23:26.046 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:26.046 } 
00:23:26.046 } 00:23:26.046 ] 00:23:26.046 }, 00:23:26.046 { 00:23:26.046 "subsystem": "iobuf", 00:23:26.046 "config": [ 00:23:26.046 { 00:23:26.046 "method": "iobuf_set_options", 00:23:26.046 "params": { 00:23:26.046 "small_pool_count": 8192, 00:23:26.046 "large_pool_count": 1024, 00:23:26.046 "small_bufsize": 8192, 00:23:26.046 "large_bufsize": 135168, 00:23:26.046 "enable_numa": false 00:23:26.046 } 00:23:26.046 } 00:23:26.046 ] 00:23:26.046 }, 00:23:26.046 { 00:23:26.046 "subsystem": "sock", 00:23:26.046 "config": [ 00:23:26.046 { 00:23:26.046 "method": "sock_set_default_impl", 00:23:26.046 "params": { 00:23:26.046 "impl_name": "posix" 00:23:26.046 } 00:23:26.046 }, 00:23:26.046 { 00:23:26.046 "method": "sock_impl_set_options", 00:23:26.046 "params": { 00:23:26.046 "impl_name": "ssl", 00:23:26.046 "recv_buf_size": 4096, 00:23:26.046 "send_buf_size": 4096, 00:23:26.046 "enable_recv_pipe": true, 00:23:26.046 "enable_quickack": false, 00:23:26.046 "enable_placement_id": 0, 00:23:26.046 "enable_zerocopy_send_server": true, 00:23:26.046 "enable_zerocopy_send_client": false, 00:23:26.046 "zerocopy_threshold": 0, 00:23:26.046 "tls_version": 0, 00:23:26.046 "enable_ktls": false 00:23:26.046 } 00:23:26.046 }, 00:23:26.046 { 00:23:26.046 "method": "sock_impl_set_options", 00:23:26.046 "params": { 00:23:26.046 "impl_name": "posix", 00:23:26.046 "recv_buf_size": 2097152, 00:23:26.047 "send_buf_size": 2097152, 00:23:26.047 "enable_recv_pipe": true, 00:23:26.047 "enable_quickack": false, 00:23:26.047 "enable_placement_id": 0, 00:23:26.047 "enable_zerocopy_send_server": true, 00:23:26.047 "enable_zerocopy_send_client": false, 00:23:26.047 "zerocopy_threshold": 0, 00:23:26.047 "tls_version": 0, 00:23:26.047 "enable_ktls": false 00:23:26.047 } 00:23:26.047 } 00:23:26.047 ] 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "subsystem": "vmd", 00:23:26.047 "config": [] 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "subsystem": "accel", 00:23:26.047 "config": [ 00:23:26.047 { 00:23:26.047 
"method": "accel_set_options", 00:23:26.047 "params": { 00:23:26.047 "small_cache_size": 128, 00:23:26.047 "large_cache_size": 16, 00:23:26.047 "task_count": 2048, 00:23:26.047 "sequence_count": 2048, 00:23:26.047 "buf_count": 2048 00:23:26.047 } 00:23:26.047 } 00:23:26.047 ] 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "subsystem": "bdev", 00:23:26.047 "config": [ 00:23:26.047 { 00:23:26.047 "method": "bdev_set_options", 00:23:26.047 "params": { 00:23:26.047 "bdev_io_pool_size": 65535, 00:23:26.047 "bdev_io_cache_size": 256, 00:23:26.047 "bdev_auto_examine": true, 00:23:26.047 "iobuf_small_cache_size": 128, 00:23:26.047 "iobuf_large_cache_size": 16 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_raid_set_options", 00:23:26.047 "params": { 00:23:26.047 "process_window_size_kb": 1024, 00:23:26.047 "process_max_bandwidth_mb_sec": 0 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_iscsi_set_options", 00:23:26.047 "params": { 00:23:26.047 "timeout_sec": 30 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_nvme_set_options", 00:23:26.047 "params": { 00:23:26.047 "action_on_timeout": "none", 00:23:26.047 "timeout_us": 0, 00:23:26.047 "timeout_admin_us": 0, 00:23:26.047 "keep_alive_timeout_ms": 10000, 00:23:26.047 "arbitration_burst": 0, 00:23:26.047 "low_priority_weight": 0, 00:23:26.047 "medium_priority_weight": 0, 00:23:26.047 "high_priority_weight": 0, 00:23:26.047 "nvme_adminq_poll_period_us": 10000, 00:23:26.047 "nvme_ioq_poll_period_us": 0, 00:23:26.047 "io_queue_requests": 512, 00:23:26.047 "delay_cmd_submit": true, 00:23:26.047 "transport_retry_count": 4, 00:23:26.047 "bdev_retry_count": 3, 00:23:26.047 "transport_ack_timeout": 0, 00:23:26.047 "ctrlr_loss_timeout_sec": 0, 00:23:26.047 "reconnect_delay_sec": 0, 00:23:26.047 "fast_io_fail_timeout_sec": 0, 00:23:26.047 "disable_auto_failback": false, 00:23:26.047 "generate_uuids": false, 00:23:26.047 "transport_tos": 0, 00:23:26.047 
"nvme_error_stat": false, 00:23:26.047 "rdma_srq_size": 0, 00:23:26.047 "io_path_stat": false, 00:23:26.047 "allow_accel_sequence": false, 00:23:26.047 "rdma_max_cq_size": 0, 00:23:26.047 "rdma_cm_event_timeout_ms": 0, 00:23:26.047 "dhchap_digests": [ 00:23:26.047 "sha256", 00:23:26.047 "sha384", 00:23:26.047 "sha512" 00:23:26.047 ], 00:23:26.047 "dhchap_dhgroups": [ 00:23:26.047 "null", 00:23:26.047 "ffdhe2048", 00:23:26.047 "ffdhe3072", 00:23:26.047 "ffdhe4096", 00:23:26.047 "ffdhe6144", 00:23:26.047 "ffdhe8192" 00:23:26.047 ] 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_nvme_attach_controller", 00:23:26.047 "params": { 00:23:26.047 "name": "TLSTEST", 00:23:26.047 "trtype": "TCP", 00:23:26.047 "adrfam": "IPv4", 00:23:26.047 "traddr": "10.0.0.2", 00:23:26.047 "trsvcid": "4420", 00:23:26.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.047 "prchk_reftag": false, 00:23:26.047 "prchk_guard": false, 00:23:26.047 "ctrlr_loss_timeout_sec": 0, 00:23:26.047 "reconnect_delay_sec": 0, 00:23:26.047 "fast_io_fail_timeout_sec": 0, 00:23:26.047 "psk": "key0", 00:23:26.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.047 "hdgst": false, 00:23:26.047 "ddgst": false, 00:23:26.047 "multipath": "multipath" 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_nvme_set_hotplug", 00:23:26.047 "params": { 00:23:26.047 "period_us": 100000, 00:23:26.047 "enable": false 00:23:26.047 } 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "method": "bdev_wait_for_examine" 00:23:26.047 } 00:23:26.047 ] 00:23:26.047 }, 00:23:26.047 { 00:23:26.047 "subsystem": "nbd", 00:23:26.047 "config": [] 00:23:26.047 } 00:23:26.047 ] 00:23:26.047 }' 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2549544 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2549544 ']' 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2549544 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549544 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549544' 00:23:26.047 killing process with pid 2549544 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2549544 00:23:26.047 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.047 00:23:26.047 Latency(us) 00:23:26.047 [2024-12-07T10:35:25.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.047 [2024-12-07T10:35:25.401Z] =================================================================================================================== 00:23:26.047 [2024-12-07T10:35:25.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.047 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2549544 00:23:26.618 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2549065 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2549065 ']' 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2549065 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549065 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549065' 00:23:26.619 killing process with pid 2549065 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2549065 00:23:26.619 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2549065 00:23:27.189 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:27.189 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.189 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.189 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.189 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:27.189 "subsystems": [ 00:23:27.189 { 00:23:27.189 "subsystem": "keyring", 00:23:27.189 "config": [ 00:23:27.189 { 00:23:27.189 "method": "keyring_file_add_key", 00:23:27.189 "params": { 00:23:27.189 "name": "key0", 00:23:27.189 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:27.189 } 00:23:27.189 } 00:23:27.189 ] 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "subsystem": "iobuf", 00:23:27.189 "config": [ 00:23:27.189 { 00:23:27.189 "method": "iobuf_set_options", 00:23:27.189 "params": { 00:23:27.189 "small_pool_count": 8192, 00:23:27.189 "large_pool_count": 1024, 00:23:27.189 "small_bufsize": 8192, 00:23:27.189 "large_bufsize": 135168, 
00:23:27.189 "enable_numa": false 00:23:27.189 } 00:23:27.189 } 00:23:27.189 ] 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "subsystem": "sock", 00:23:27.189 "config": [ 00:23:27.189 { 00:23:27.189 "method": "sock_set_default_impl", 00:23:27.189 "params": { 00:23:27.189 "impl_name": "posix" 00:23:27.189 } 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "method": "sock_impl_set_options", 00:23:27.189 "params": { 00:23:27.189 "impl_name": "ssl", 00:23:27.189 "recv_buf_size": 4096, 00:23:27.189 "send_buf_size": 4096, 00:23:27.189 "enable_recv_pipe": true, 00:23:27.189 "enable_quickack": false, 00:23:27.189 "enable_placement_id": 0, 00:23:27.189 "enable_zerocopy_send_server": true, 00:23:27.189 "enable_zerocopy_send_client": false, 00:23:27.189 "zerocopy_threshold": 0, 00:23:27.189 "tls_version": 0, 00:23:27.189 "enable_ktls": false 00:23:27.189 } 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "method": "sock_impl_set_options", 00:23:27.189 "params": { 00:23:27.189 "impl_name": "posix", 00:23:27.189 "recv_buf_size": 2097152, 00:23:27.189 "send_buf_size": 2097152, 00:23:27.189 "enable_recv_pipe": true, 00:23:27.189 "enable_quickack": false, 00:23:27.189 "enable_placement_id": 0, 00:23:27.189 "enable_zerocopy_send_server": true, 00:23:27.189 "enable_zerocopy_send_client": false, 00:23:27.189 "zerocopy_threshold": 0, 00:23:27.189 "tls_version": 0, 00:23:27.189 "enable_ktls": false 00:23:27.189 } 00:23:27.189 } 00:23:27.189 ] 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "subsystem": "vmd", 00:23:27.189 "config": [] 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "subsystem": "accel", 00:23:27.189 "config": [ 00:23:27.189 { 00:23:27.189 "method": "accel_set_options", 00:23:27.189 "params": { 00:23:27.189 "small_cache_size": 128, 00:23:27.189 "large_cache_size": 16, 00:23:27.189 "task_count": 2048, 00:23:27.189 "sequence_count": 2048, 00:23:27.189 "buf_count": 2048 00:23:27.189 } 00:23:27.189 } 00:23:27.189 ] 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "subsystem": "bdev", 00:23:27.189 
"config": [ 00:23:27.189 { 00:23:27.189 "method": "bdev_set_options", 00:23:27.189 "params": { 00:23:27.189 "bdev_io_pool_size": 65535, 00:23:27.189 "bdev_io_cache_size": 256, 00:23:27.189 "bdev_auto_examine": true, 00:23:27.189 "iobuf_small_cache_size": 128, 00:23:27.189 "iobuf_large_cache_size": 16 00:23:27.189 } 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "method": "bdev_raid_set_options", 00:23:27.189 "params": { 00:23:27.189 "process_window_size_kb": 1024, 00:23:27.189 "process_max_bandwidth_mb_sec": 0 00:23:27.189 } 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "method": "bdev_iscsi_set_options", 00:23:27.189 "params": { 00:23:27.189 "timeout_sec": 30 00:23:27.189 } 00:23:27.189 }, 00:23:27.189 { 00:23:27.189 "method": "bdev_nvme_set_options", 00:23:27.189 "params": { 00:23:27.189 "action_on_timeout": "none", 00:23:27.189 "timeout_us": 0, 00:23:27.189 "timeout_admin_us": 0, 00:23:27.189 "keep_alive_timeout_ms": 10000, 00:23:27.189 "arbitration_burst": 0, 00:23:27.189 "low_priority_weight": 0, 00:23:27.189 "medium_priority_weight": 0, 00:23:27.189 "high_priority_weight": 0, 00:23:27.189 "nvme_adminq_poll_period_us": 10000, 00:23:27.189 "nvme_ioq_poll_period_us": 0, 00:23:27.189 "io_queue_requests": 0, 00:23:27.189 "delay_cmd_submit": true, 00:23:27.189 "transport_retry_count": 4, 00:23:27.189 "bdev_retry_count": 3, 00:23:27.189 "transport_ack_timeout": 0, 00:23:27.189 "ctrlr_loss_timeout_sec": 0, 00:23:27.189 "reconnect_delay_sec": 0, 00:23:27.189 "fast_io_fail_timeout_sec": 0, 00:23:27.189 "disable_auto_failback": false, 00:23:27.189 "generate_uuids": false, 00:23:27.189 "transport_tos": 0, 00:23:27.189 "nvme_error_stat": false, 00:23:27.189 "rdma_srq_size": 0, 00:23:27.189 "io_path_stat": false, 00:23:27.189 "allow_accel_sequence": false, 00:23:27.189 "rdma_max_cq_size": 0, 00:23:27.189 "rdma_cm_event_timeout_ms": 0, 00:23:27.189 "dhchap_digests": [ 00:23:27.189 "sha256", 00:23:27.189 "sha384", 00:23:27.189 "sha512" 00:23:27.189 ], 00:23:27.189 
"dhchap_dhgroups": [ 00:23:27.189 "null", 00:23:27.189 "ffdhe2048", 00:23:27.189 "ffdhe3072", 00:23:27.189 "ffdhe4096", 00:23:27.189 "ffdhe6144", 00:23:27.189 "ffdhe8192" 00:23:27.189 ] 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "bdev_nvme_set_hotplug", 00:23:27.190 "params": { 00:23:27.190 "period_us": 100000, 00:23:27.190 "enable": false 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "bdev_malloc_create", 00:23:27.190 "params": { 00:23:27.190 "name": "malloc0", 00:23:27.190 "num_blocks": 8192, 00:23:27.190 "block_size": 4096, 00:23:27.190 "physical_block_size": 4096, 00:23:27.190 "uuid": "da147450-529a-4181-8673-2255ed9d3c9f", 00:23:27.190 "optimal_io_boundary": 0, 00:23:27.190 "md_size": 0, 00:23:27.190 "dif_type": 0, 00:23:27.190 "dif_is_head_of_md": false, 00:23:27.190 "dif_pi_format": 0 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "bdev_wait_for_examine" 00:23:27.190 } 00:23:27.190 ] 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "subsystem": "nbd", 00:23:27.190 "config": [] 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "subsystem": "scheduler", 00:23:27.190 "config": [ 00:23:27.190 { 00:23:27.190 "method": "framework_set_scheduler", 00:23:27.190 "params": { 00:23:27.190 "name": "static" 00:23:27.190 } 00:23:27.190 } 00:23:27.190 ] 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "subsystem": "nvmf", 00:23:27.190 "config": [ 00:23:27.190 { 00:23:27.190 "method": "nvmf_set_config", 00:23:27.190 "params": { 00:23:27.190 "discovery_filter": "match_any", 00:23:27.190 "admin_cmd_passthru": { 00:23:27.190 "identify_ctrlr": false 00:23:27.190 }, 00:23:27.190 "dhchap_digests": [ 00:23:27.190 "sha256", 00:23:27.190 "sha384", 00:23:27.190 "sha512" 00:23:27.190 ], 00:23:27.190 "dhchap_dhgroups": [ 00:23:27.190 "null", 00:23:27.190 "ffdhe2048", 00:23:27.190 "ffdhe3072", 00:23:27.190 "ffdhe4096", 00:23:27.190 "ffdhe6144", 00:23:27.190 "ffdhe8192" 00:23:27.190 ] 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 
00:23:27.190 "method": "nvmf_set_max_subsystems", 00:23:27.190 "params": { 00:23:27.190 "max_subsystems": 1024 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_set_crdt", 00:23:27.190 "params": { 00:23:27.190 "crdt1": 0, 00:23:27.190 "crdt2": 0, 00:23:27.190 "crdt3": 0 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_create_transport", 00:23:27.190 "params": { 00:23:27.190 "trtype": "TCP", 00:23:27.190 "max_queue_depth": 128, 00:23:27.190 "max_io_qpairs_per_ctrlr": 127, 00:23:27.190 "in_capsule_data_size": 4096, 00:23:27.190 "max_io_size": 131072, 00:23:27.190 "io_unit_size": 131072, 00:23:27.190 "max_aq_depth": 128, 00:23:27.190 "num_shared_buffers": 511, 00:23:27.190 "buf_cache_size": 4294967295, 00:23:27.190 "dif_insert_or_strip": false, 00:23:27.190 "zcopy": false, 00:23:27.190 "c2h_success": false, 00:23:27.190 "sock_priority": 0, 00:23:27.190 "abort_timeout_sec": 1, 00:23:27.190 "ack_timeout": 0, 00:23:27.190 "data_wr_pool_size": 0 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_create_subsystem", 00:23:27.190 "params": { 00:23:27.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.190 "allow_any_host": false, 00:23:27.190 "serial_number": "SPDK00000000000001", 00:23:27.190 "model_number": "SPDK bdev Controller", 00:23:27.190 "max_namespaces": 10, 00:23:27.190 "min_cntlid": 1, 00:23:27.190 "max_cntlid": 65519, 00:23:27.190 "ana_reporting": false 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_subsystem_add_host", 00:23:27.190 "params": { 00:23:27.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.190 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.190 "psk": "key0" 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_subsystem_add_ns", 00:23:27.190 "params": { 00:23:27.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.190 "namespace": { 00:23:27.190 "nsid": 1, 00:23:27.190 "bdev_name": "malloc0", 00:23:27.190 "nguid": 
"DA147450529A418186732255ED9D3C9F", 00:23:27.190 "uuid": "da147450-529a-4181-8673-2255ed9d3c9f", 00:23:27.190 "no_auto_visible": false 00:23:27.190 } 00:23:27.190 } 00:23:27.190 }, 00:23:27.190 { 00:23:27.190 "method": "nvmf_subsystem_add_listener", 00:23:27.190 "params": { 00:23:27.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.190 "listen_address": { 00:23:27.190 "trtype": "TCP", 00:23:27.190 "adrfam": "IPv4", 00:23:27.190 "traddr": "10.0.0.2", 00:23:27.190 "trsvcid": "4420" 00:23:27.190 }, 00:23:27.190 "secure_channel": true 00:23:27.190 } 00:23:27.190 } 00:23:27.190 ] 00:23:27.190 } 00:23:27.190 ] 00:23:27.190 }' 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2550121 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2550121 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2550121 ']' 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.190 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.190 [2024-12-07 11:35:26.475726] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:27.190 [2024-12-07 11:35:26.475835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.451 [2024-12-07 11:35:26.619530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.451 [2024-12-07 11:35:26.695846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.451 [2024-12-07 11:35:26.695884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.451 [2024-12-07 11:35:26.695892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.451 [2024-12-07 11:35:26.695901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.451 [2024-12-07 11:35:26.695909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.451 [2024-12-07 11:35:26.696854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.710 [2024-12-07 11:35:27.035822] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.971 [2024-12-07 11:35:27.067852] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.971 [2024-12-07 11:35:27.068113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2550407 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2550407 /var/tmp/bdevperf.sock 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2550407 ']' 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:27.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:27.971 "subsystems": [ 00:23:27.971 { 00:23:27.971 "subsystem": "keyring", 00:23:27.971 "config": [ 00:23:27.971 { 00:23:27.971 "method": "keyring_file_add_key", 00:23:27.971 "params": { 00:23:27.971 "name": "key0", 00:23:27.971 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:27.971 } 00:23:27.971 } 00:23:27.971 ] 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "subsystem": "iobuf", 00:23:27.971 "config": [ 00:23:27.971 { 00:23:27.971 "method": "iobuf_set_options", 00:23:27.971 "params": { 00:23:27.971 "small_pool_count": 8192, 00:23:27.971 "large_pool_count": 1024, 00:23:27.971 "small_bufsize": 8192, 00:23:27.971 "large_bufsize": 135168, 00:23:27.971 "enable_numa": false 00:23:27.971 } 00:23:27.971 } 00:23:27.971 ] 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "subsystem": "sock", 00:23:27.971 "config": [ 00:23:27.971 { 00:23:27.971 "method": "sock_set_default_impl", 00:23:27.971 "params": { 00:23:27.971 "impl_name": "posix" 00:23:27.971 } 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "method": "sock_impl_set_options", 00:23:27.971 "params": { 00:23:27.971 "impl_name": "ssl", 00:23:27.971 "recv_buf_size": 4096, 00:23:27.971 "send_buf_size": 4096, 00:23:27.971 "enable_recv_pipe": true, 00:23:27.971 "enable_quickack": false, 00:23:27.971 "enable_placement_id": 0, 00:23:27.971 "enable_zerocopy_send_server": true, 00:23:27.971 
"enable_zerocopy_send_client": false, 00:23:27.971 "zerocopy_threshold": 0, 00:23:27.971 "tls_version": 0, 00:23:27.971 "enable_ktls": false 00:23:27.971 } 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "method": "sock_impl_set_options", 00:23:27.971 "params": { 00:23:27.971 "impl_name": "posix", 00:23:27.971 "recv_buf_size": 2097152, 00:23:27.971 "send_buf_size": 2097152, 00:23:27.971 "enable_recv_pipe": true, 00:23:27.971 "enable_quickack": false, 00:23:27.971 "enable_placement_id": 0, 00:23:27.971 "enable_zerocopy_send_server": true, 00:23:27.971 "enable_zerocopy_send_client": false, 00:23:27.971 "zerocopy_threshold": 0, 00:23:27.971 "tls_version": 0, 00:23:27.971 "enable_ktls": false 00:23:27.971 } 00:23:27.971 } 00:23:27.971 ] 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "subsystem": "vmd", 00:23:27.971 "config": [] 00:23:27.971 }, 00:23:27.971 { 00:23:27.971 "subsystem": "accel", 00:23:27.971 "config": [ 00:23:27.971 { 00:23:27.971 "method": "accel_set_options", 00:23:27.971 "params": { 00:23:27.971 "small_cache_size": 128, 00:23:27.971 "large_cache_size": 16, 00:23:27.971 "task_count": 2048, 00:23:27.971 "sequence_count": 2048, 00:23:27.971 "buf_count": 2048 00:23:27.971 } 00:23:27.972 } 00:23:27.972 ] 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "subsystem": "bdev", 00:23:27.972 "config": [ 00:23:27.972 { 00:23:27.972 "method": "bdev_set_options", 00:23:27.972 "params": { 00:23:27.972 "bdev_io_pool_size": 65535, 00:23:27.972 "bdev_io_cache_size": 256, 00:23:27.972 "bdev_auto_examine": true, 00:23:27.972 "iobuf_small_cache_size": 128, 00:23:27.972 "iobuf_large_cache_size": 16 00:23:27.972 } 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "method": "bdev_raid_set_options", 00:23:27.972 "params": { 00:23:27.972 "process_window_size_kb": 1024, 00:23:27.972 "process_max_bandwidth_mb_sec": 0 00:23:27.972 } 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "method": "bdev_iscsi_set_options", 00:23:27.972 "params": { 00:23:27.972 "timeout_sec": 30 00:23:27.972 } 00:23:27.972 }, 
00:23:27.972 { 00:23:27.972 "method": "bdev_nvme_set_options", 00:23:27.972 "params": { 00:23:27.972 "action_on_timeout": "none", 00:23:27.972 "timeout_us": 0, 00:23:27.972 "timeout_admin_us": 0, 00:23:27.972 "keep_alive_timeout_ms": 10000, 00:23:27.972 "arbitration_burst": 0, 00:23:27.972 "low_priority_weight": 0, 00:23:27.972 "medium_priority_weight": 0, 00:23:27.972 "high_priority_weight": 0, 00:23:27.972 "nvme_adminq_poll_period_us": 10000, 00:23:27.972 "nvme_ioq_poll_period_us": 0, 00:23:27.972 "io_queue_requests": 512, 00:23:27.972 "delay_cmd_submit": true, 00:23:27.972 "transport_retry_count": 4, 00:23:27.972 "bdev_retry_count": 3, 00:23:27.972 "transport_ack_timeout": 0, 00:23:27.972 "ctrlr_loss_timeout_sec": 0, 00:23:27.972 "reconnect_delay_sec": 0, 00:23:27.972 "fast_io_fail_timeout_sec": 0, 00:23:27.972 "disable_auto_failback": false, 00:23:27.972 "generate_uuids": false, 00:23:27.972 "transport_tos": 0, 00:23:27.972 "nvme_error_stat": false, 00:23:27.972 "rdma_srq_size": 0, 00:23:27.972 "io_path_stat": false, 00:23:27.972 "allow_accel_sequence": false, 00:23:27.972 "rdma_max_cq_size": 0, 00:23:27.972 "rdma_cm_event_timeout_ms": 0, 00:23:27.972 "dhchap_digests": [ 00:23:27.972 "sha256", 00:23:27.972 "sha384", 00:23:27.972 "sha512" 00:23:27.972 ], 00:23:27.972 "dhchap_dhgroups": [ 00:23:27.972 "null", 00:23:27.972 "ffdhe2048", 00:23:27.972 "ffdhe3072", 00:23:27.972 "ffdhe4096", 00:23:27.972 "ffdhe6144", 00:23:27.972 "ffdhe8192" 00:23:27.972 ] 00:23:27.972 } 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "method": "bdev_nvme_attach_controller", 00:23:27.972 "params": { 00:23:27.972 "name": "TLSTEST", 00:23:27.972 "trtype": "TCP", 00:23:27.972 "adrfam": "IPv4", 00:23:27.972 "traddr": "10.0.0.2", 00:23:27.972 "trsvcid": "4420", 00:23:27.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.972 "prchk_reftag": false, 00:23:27.972 "prchk_guard": false, 00:23:27.972 "ctrlr_loss_timeout_sec": 0, 00:23:27.972 "reconnect_delay_sec": 0, 00:23:27.972 
"fast_io_fail_timeout_sec": 0, 00:23:27.972 "psk": "key0", 00:23:27.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.972 "hdgst": false, 00:23:27.972 "ddgst": false, 00:23:27.972 "multipath": "multipath" 00:23:27.972 } 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "method": "bdev_nvme_set_hotplug", 00:23:27.972 "params": { 00:23:27.972 "period_us": 100000, 00:23:27.972 "enable": false 00:23:27.972 } 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "method": "bdev_wait_for_examine" 00:23:27.972 } 00:23:27.972 ] 00:23:27.972 }, 00:23:27.972 { 00:23:27.972 "subsystem": "nbd", 00:23:27.972 "config": [] 00:23:27.972 } 00:23:27.972 ] 00:23:27.972 }' 00:23:28.232 [2024-12-07 11:35:27.343414] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:28.232 [2024-12-07 11:35:27.343528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550407 ] 00:23:28.232 [2024-12-07 11:35:27.451125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.232 [2024-12-07 11:35:27.524849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.492 [2024-12-07 11:35:27.786283] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.061 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.061 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.061 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:29.061 Running I/O for 10 seconds... 
00:23:30.942 4636.00 IOPS, 18.11 MiB/s [2024-12-07T10:35:31.235Z] 4619.50 IOPS, 18.04 MiB/s [2024-12-07T10:35:32.617Z] 4907.00 IOPS, 19.17 MiB/s [2024-12-07T10:35:33.556Z] 5032.25 IOPS, 19.66 MiB/s [2024-12-07T10:35:34.497Z] 5010.80 IOPS, 19.57 MiB/s [2024-12-07T10:35:35.436Z] 4853.83 IOPS, 18.96 MiB/s [2024-12-07T10:35:36.374Z] 4758.00 IOPS, 18.59 MiB/s [2024-12-07T10:35:37.313Z] 4755.38 IOPS, 18.58 MiB/s [2024-12-07T10:35:38.254Z] 4663.89 IOPS, 18.22 MiB/s [2024-12-07T10:35:38.254Z] 4688.30 IOPS, 18.31 MiB/s 00:23:38.900 Latency(us) 00:23:38.900 [2024-12-07T10:35:38.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.900 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:38.900 Verification LBA range: start 0x0 length 0x2000 00:23:38.900 TLSTESTn1 : 10.01 4694.64 18.34 0.00 0.00 27229.00 5133.65 25340.59 00:23:38.900 [2024-12-07T10:35:38.254Z] =================================================================================================================== 00:23:38.900 [2024-12-07T10:35:38.254Z] Total : 4694.64 18.34 0.00 0.00 27229.00 5133.65 25340.59 00:23:38.900 { 00:23:38.900 "results": [ 00:23:38.900 { 00:23:38.900 "job": "TLSTESTn1", 00:23:38.900 "core_mask": "0x4", 00:23:38.900 "workload": "verify", 00:23:38.900 "status": "finished", 00:23:38.900 "verify_range": { 00:23:38.900 "start": 0, 00:23:38.900 "length": 8192 00:23:38.900 }, 00:23:38.900 "queue_depth": 128, 00:23:38.900 "io_size": 4096, 00:23:38.900 "runtime": 10.013554, 00:23:38.900 "iops": 4694.636889160432, 00:23:38.900 "mibps": 18.338425348282936, 00:23:38.900 "io_failed": 0, 00:23:38.900 "io_timeout": 0, 00:23:38.900 "avg_latency_us": 27228.99775877473, 00:23:38.900 "min_latency_us": 5133.653333333334, 00:23:38.900 "max_latency_us": 25340.586666666666 00:23:38.900 } 00:23:38.900 ], 00:23:38.900 "core_count": 1 00:23:38.900 } 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2550407 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2550407 ']' 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2550407 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550407 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550407' 00:23:39.160 killing process with pid 2550407 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2550407 00:23:39.160 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.160 00:23:39.160 Latency(us) 00:23:39.160 [2024-12-07T10:35:38.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.160 [2024-12-07T10:35:38.514Z] =================================================================================================================== 00:23:39.160 [2024-12-07T10:35:38.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.160 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2550407 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2550121 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2550121 ']' 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2550121 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550121 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550121' 00:23:39.728 killing process with pid 2550121 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2550121 00:23:39.728 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2550121 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2552790 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2552790 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.298 
11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2552790 ']' 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.298 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.298 [2024-12-07 11:35:39.593967] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:40.298 [2024-12-07 11:35:39.594087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.558 [2024-12-07 11:35:39.727995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.558 [2024-12-07 11:35:39.823043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.558 [2024-12-07 11:35:39.823091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.558 [2024-12-07 11:35:39.823103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.558 [2024-12-07 11:35:39.823115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:40.558 [2024-12-07 11:35:39.823127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.558 [2024-12-07 11:35:39.824365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.jpsCLJTpOO 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jpsCLJTpOO 00:23:41.128 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.388 [2024-12-07 11:35:40.553221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.388 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.648 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.648 [2024-12-07 11:35:40.922175] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:41.648 [2024-12-07 11:35:40.922471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.648 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.908 malloc0 00:23:41.908 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.167 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:42.167 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2553185 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2553185 /var/tmp/bdevperf.sock 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2553185 ']' 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.428 
11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.428 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.428 [2024-12-07 11:35:41.760269] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:42.428 [2024-12-07 11:35:41.760376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553185 ] 00:23:42.688 [2024-12-07 11:35:41.894459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.688 [2024-12-07 11:35:41.968618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.257 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.257 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.257 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:43.516 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:43.516 [2024-12-07 11:35:42.851814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:43.775 nvme0n1 00:23:43.775 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.775 Running I/O for 1 seconds... 00:23:44.971 5306.00 IOPS, 20.73 MiB/s 00:23:44.971 Latency(us) 00:23:44.971 [2024-12-07T10:35:44.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.971 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:44.971 Verification LBA range: start 0x0 length 0x2000 00:23:44.971 nvme0n1 : 1.03 5286.12 20.65 0.00 0.00 23949.73 5270.19 24466.77 00:23:44.971 [2024-12-07T10:35:44.325Z] =================================================================================================================== 00:23:44.971 [2024-12-07T10:35:44.325Z] Total : 5286.12 20.65 0.00 0.00 23949.73 5270.19 24466.77 00:23:44.971 { 00:23:44.971 "results": [ 00:23:44.971 { 00:23:44.971 "job": "nvme0n1", 00:23:44.971 "core_mask": "0x2", 00:23:44.971 "workload": "verify", 00:23:44.971 "status": "finished", 00:23:44.971 "verify_range": { 00:23:44.971 "start": 0, 00:23:44.971 "length": 8192 00:23:44.971 }, 00:23:44.971 "queue_depth": 128, 00:23:44.971 "io_size": 4096, 00:23:44.971 "runtime": 1.027975, 00:23:44.971 "iops": 5286.120771419538, 00:23:44.971 "mibps": 20.64890926335757, 00:23:44.971 "io_failed": 0, 00:23:44.971 "io_timeout": 0, 00:23:44.971 "avg_latency_us": 23949.725569868726, 00:23:44.971 "min_latency_us": 5270.1866666666665, 00:23:44.971 "max_latency_us": 24466.773333333334 00:23:44.971 } 00:23:44.971 ], 00:23:44.972 "core_count": 1 00:23:44.972 } 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2553185 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2553185 ']' 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2553185 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2553185 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2553185' 00:23:44.972 killing process with pid 2553185 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2553185 00:23:44.972 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.972 00:23:44.972 Latency(us) 00:23:44.972 [2024-12-07T10:35:44.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.972 [2024-12-07T10:35:44.326Z] =================================================================================================================== 00:23:44.972 [2024-12-07T10:35:44.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.972 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2553185 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2552790 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2552790 ']' 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2552790 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552790 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552790' 00:23:45.542 killing process with pid 2552790 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2552790 00:23:45.542 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2552790 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2553876 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2553876 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2553876 ']' 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.482 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.482 [2024-12-07 11:35:45.641243] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:46.482 [2024-12-07 11:35:45.641361] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.482 [2024-12-07 11:35:45.788586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.742 [2024-12-07 11:35:45.886505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.742 [2024-12-07 11:35:45.886548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.742 [2024-12-07 11:35:45.886560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.742 [2024-12-07 11:35:45.886571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.742 [2024-12-07 11:35:45.886582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.742 [2024-12-07 11:35:45.887807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.310 [2024-12-07 11:35:46.439364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.310 malloc0 00:23:47.310 [2024-12-07 11:35:46.485901] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.310 [2024-12-07 11:35:46.486215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2554221 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2554221 /var/tmp/bdevperf.sock 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2554221 ']' 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.310 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.310 [2024-12-07 11:35:46.592193] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:47.310 [2024-12-07 11:35:46.592295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554221 ] 00:23:47.569 [2024-12-07 11:35:46.725910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.569 [2024-12-07 11:35:46.799553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.137 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.137 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.137 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jpsCLJTpOO 00:23:48.396 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:48.396 [2024-12-07 11:35:47.690320] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.655 nvme0n1 00:23:48.656 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.656 Running I/O for 1 seconds... 
00:23:49.596 4781.00 IOPS, 18.68 MiB/s 00:23:49.596 Latency(us) 00:23:49.596 [2024-12-07T10:35:48.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.596 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:49.596 Verification LBA range: start 0x0 length 0x2000 00:23:49.596 nvme0n1 : 1.03 4782.13 18.68 0.00 0.00 26398.44 8628.91 31020.37 00:23:49.596 [2024-12-07T10:35:48.950Z] =================================================================================================================== 00:23:49.596 [2024-12-07T10:35:48.950Z] Total : 4782.13 18.68 0.00 0.00 26398.44 8628.91 31020.37 00:23:49.596 { 00:23:49.596 "results": [ 00:23:49.596 { 00:23:49.596 "job": "nvme0n1", 00:23:49.596 "core_mask": "0x2", 00:23:49.596 "workload": "verify", 00:23:49.596 "status": "finished", 00:23:49.596 "verify_range": { 00:23:49.596 "start": 0, 00:23:49.596 "length": 8192 00:23:49.596 }, 00:23:49.596 "queue_depth": 128, 00:23:49.596 "io_size": 4096, 00:23:49.596 "runtime": 1.02653, 00:23:49.596 "iops": 4782.13008874558, 00:23:49.596 "mibps": 18.680195659162422, 00:23:49.596 "io_failed": 0, 00:23:49.596 "io_timeout": 0, 00:23:49.596 "avg_latency_us": 26398.443262035715, 00:23:49.596 "min_latency_us": 8628.906666666666, 00:23:49.596 "max_latency_us": 31020.373333333333 00:23:49.596 } 00:23:49.596 ], 00:23:49.596 "core_count": 1 00:23:49.596 } 00:23:49.596 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:49.596 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.596 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.857 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.857 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:49.857 "subsystems": [ 00:23:49.857 { 00:23:49.857 "subsystem": 
"keyring", 00:23:49.857 "config": [ 00:23:49.857 { 00:23:49.857 "method": "keyring_file_add_key", 00:23:49.857 "params": { 00:23:49.857 "name": "key0", 00:23:49.857 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:49.857 } 00:23:49.857 } 00:23:49.857 ] 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "subsystem": "iobuf", 00:23:49.857 "config": [ 00:23:49.857 { 00:23:49.857 "method": "iobuf_set_options", 00:23:49.857 "params": { 00:23:49.857 "small_pool_count": 8192, 00:23:49.857 "large_pool_count": 1024, 00:23:49.857 "small_bufsize": 8192, 00:23:49.857 "large_bufsize": 135168, 00:23:49.857 "enable_numa": false 00:23:49.857 } 00:23:49.857 } 00:23:49.857 ] 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "subsystem": "sock", 00:23:49.857 "config": [ 00:23:49.857 { 00:23:49.857 "method": "sock_set_default_impl", 00:23:49.857 "params": { 00:23:49.857 "impl_name": "posix" 00:23:49.857 } 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "method": "sock_impl_set_options", 00:23:49.857 "params": { 00:23:49.857 "impl_name": "ssl", 00:23:49.857 "recv_buf_size": 4096, 00:23:49.857 "send_buf_size": 4096, 00:23:49.857 "enable_recv_pipe": true, 00:23:49.857 "enable_quickack": false, 00:23:49.857 "enable_placement_id": 0, 00:23:49.857 "enable_zerocopy_send_server": true, 00:23:49.857 "enable_zerocopy_send_client": false, 00:23:49.857 "zerocopy_threshold": 0, 00:23:49.857 "tls_version": 0, 00:23:49.857 "enable_ktls": false 00:23:49.857 } 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "method": "sock_impl_set_options", 00:23:49.857 "params": { 00:23:49.857 "impl_name": "posix", 00:23:49.857 "recv_buf_size": 2097152, 00:23:49.857 "send_buf_size": 2097152, 00:23:49.857 "enable_recv_pipe": true, 00:23:49.857 "enable_quickack": false, 00:23:49.857 "enable_placement_id": 0, 00:23:49.857 "enable_zerocopy_send_server": true, 00:23:49.857 "enable_zerocopy_send_client": false, 00:23:49.857 "zerocopy_threshold": 0, 00:23:49.857 "tls_version": 0, 00:23:49.857 "enable_ktls": false 00:23:49.857 } 00:23:49.857 } 00:23:49.857 
] 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "subsystem": "vmd", 00:23:49.857 "config": [] 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "subsystem": "accel", 00:23:49.857 "config": [ 00:23:49.857 { 00:23:49.857 "method": "accel_set_options", 00:23:49.857 "params": { 00:23:49.857 "small_cache_size": 128, 00:23:49.857 "large_cache_size": 16, 00:23:49.857 "task_count": 2048, 00:23:49.857 "sequence_count": 2048, 00:23:49.857 "buf_count": 2048 00:23:49.857 } 00:23:49.857 } 00:23:49.857 ] 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "subsystem": "bdev", 00:23:49.857 "config": [ 00:23:49.857 { 00:23:49.857 "method": "bdev_set_options", 00:23:49.857 "params": { 00:23:49.857 "bdev_io_pool_size": 65535, 00:23:49.857 "bdev_io_cache_size": 256, 00:23:49.857 "bdev_auto_examine": true, 00:23:49.857 "iobuf_small_cache_size": 128, 00:23:49.857 "iobuf_large_cache_size": 16 00:23:49.857 } 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "method": "bdev_raid_set_options", 00:23:49.857 "params": { 00:23:49.857 "process_window_size_kb": 1024, 00:23:49.857 "process_max_bandwidth_mb_sec": 0 00:23:49.857 } 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "method": "bdev_iscsi_set_options", 00:23:49.857 "params": { 00:23:49.857 "timeout_sec": 30 00:23:49.857 } 00:23:49.857 }, 00:23:49.857 { 00:23:49.857 "method": "bdev_nvme_set_options", 00:23:49.857 "params": { 00:23:49.857 "action_on_timeout": "none", 00:23:49.857 "timeout_us": 0, 00:23:49.857 "timeout_admin_us": 0, 00:23:49.857 "keep_alive_timeout_ms": 10000, 00:23:49.857 "arbitration_burst": 0, 00:23:49.857 "low_priority_weight": 0, 00:23:49.857 "medium_priority_weight": 0, 00:23:49.857 "high_priority_weight": 0, 00:23:49.857 "nvme_adminq_poll_period_us": 10000, 00:23:49.857 "nvme_ioq_poll_period_us": 0, 00:23:49.857 "io_queue_requests": 0, 00:23:49.857 "delay_cmd_submit": true, 00:23:49.857 "transport_retry_count": 4, 00:23:49.857 "bdev_retry_count": 3, 00:23:49.858 "transport_ack_timeout": 0, 00:23:49.858 "ctrlr_loss_timeout_sec": 0, 
00:23:49.858 "reconnect_delay_sec": 0, 00:23:49.858 "fast_io_fail_timeout_sec": 0, 00:23:49.858 "disable_auto_failback": false, 00:23:49.858 "generate_uuids": false, 00:23:49.858 "transport_tos": 0, 00:23:49.858 "nvme_error_stat": false, 00:23:49.858 "rdma_srq_size": 0, 00:23:49.858 "io_path_stat": false, 00:23:49.858 "allow_accel_sequence": false, 00:23:49.858 "rdma_max_cq_size": 0, 00:23:49.858 "rdma_cm_event_timeout_ms": 0, 00:23:49.858 "dhchap_digests": [ 00:23:49.858 "sha256", 00:23:49.858 "sha384", 00:23:49.858 "sha512" 00:23:49.858 ], 00:23:49.858 "dhchap_dhgroups": [ 00:23:49.858 "null", 00:23:49.858 "ffdhe2048", 00:23:49.858 "ffdhe3072", 00:23:49.858 "ffdhe4096", 00:23:49.858 "ffdhe6144", 00:23:49.858 "ffdhe8192" 00:23:49.858 ] 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "bdev_nvme_set_hotplug", 00:23:49.858 "params": { 00:23:49.858 "period_us": 100000, 00:23:49.858 "enable": false 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "bdev_malloc_create", 00:23:49.858 "params": { 00:23:49.858 "name": "malloc0", 00:23:49.858 "num_blocks": 8192, 00:23:49.858 "block_size": 4096, 00:23:49.858 "physical_block_size": 4096, 00:23:49.858 "uuid": "75c3d760-081d-43a3-ae36-95bc69951cb4", 00:23:49.858 "optimal_io_boundary": 0, 00:23:49.858 "md_size": 0, 00:23:49.858 "dif_type": 0, 00:23:49.858 "dif_is_head_of_md": false, 00:23:49.858 "dif_pi_format": 0 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "bdev_wait_for_examine" 00:23:49.858 } 00:23:49.858 ] 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "subsystem": "nbd", 00:23:49.858 "config": [] 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "subsystem": "scheduler", 00:23:49.858 "config": [ 00:23:49.858 { 00:23:49.858 "method": "framework_set_scheduler", 00:23:49.858 "params": { 00:23:49.858 "name": "static" 00:23:49.858 } 00:23:49.858 } 00:23:49.858 ] 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "subsystem": "nvmf", 00:23:49.858 "config": [ 00:23:49.858 { 
00:23:49.858 "method": "nvmf_set_config", 00:23:49.858 "params": { 00:23:49.858 "discovery_filter": "match_any", 00:23:49.858 "admin_cmd_passthru": { 00:23:49.858 "identify_ctrlr": false 00:23:49.858 }, 00:23:49.858 "dhchap_digests": [ 00:23:49.858 "sha256", 00:23:49.858 "sha384", 00:23:49.858 "sha512" 00:23:49.858 ], 00:23:49.858 "dhchap_dhgroups": [ 00:23:49.858 "null", 00:23:49.858 "ffdhe2048", 00:23:49.858 "ffdhe3072", 00:23:49.858 "ffdhe4096", 00:23:49.858 "ffdhe6144", 00:23:49.858 "ffdhe8192" 00:23:49.858 ] 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_set_max_subsystems", 00:23:49.858 "params": { 00:23:49.858 "max_subsystems": 1024 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_set_crdt", 00:23:49.858 "params": { 00:23:49.858 "crdt1": 0, 00:23:49.858 "crdt2": 0, 00:23:49.858 "crdt3": 0 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_create_transport", 00:23:49.858 "params": { 00:23:49.858 "trtype": "TCP", 00:23:49.858 "max_queue_depth": 128, 00:23:49.858 "max_io_qpairs_per_ctrlr": 127, 00:23:49.858 "in_capsule_data_size": 4096, 00:23:49.858 "max_io_size": 131072, 00:23:49.858 "io_unit_size": 131072, 00:23:49.858 "max_aq_depth": 128, 00:23:49.858 "num_shared_buffers": 511, 00:23:49.858 "buf_cache_size": 4294967295, 00:23:49.858 "dif_insert_or_strip": false, 00:23:49.858 "zcopy": false, 00:23:49.858 "c2h_success": false, 00:23:49.858 "sock_priority": 0, 00:23:49.858 "abort_timeout_sec": 1, 00:23:49.858 "ack_timeout": 0, 00:23:49.858 "data_wr_pool_size": 0 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_create_subsystem", 00:23:49.858 "params": { 00:23:49.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.858 "allow_any_host": false, 00:23:49.858 "serial_number": "00000000000000000000", 00:23:49.858 "model_number": "SPDK bdev Controller", 00:23:49.858 "max_namespaces": 32, 00:23:49.858 "min_cntlid": 1, 00:23:49.858 "max_cntlid": 65519, 00:23:49.858 
"ana_reporting": false 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_subsystem_add_host", 00:23:49.858 "params": { 00:23:49.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.858 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.858 "psk": "key0" 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_subsystem_add_ns", 00:23:49.858 "params": { 00:23:49.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.858 "namespace": { 00:23:49.858 "nsid": 1, 00:23:49.858 "bdev_name": "malloc0", 00:23:49.858 "nguid": "75C3D760081D43A3AE3695BC69951CB4", 00:23:49.858 "uuid": "75c3d760-081d-43a3-ae36-95bc69951cb4", 00:23:49.858 "no_auto_visible": false 00:23:49.858 } 00:23:49.858 } 00:23:49.858 }, 00:23:49.858 { 00:23:49.858 "method": "nvmf_subsystem_add_listener", 00:23:49.858 "params": { 00:23:49.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.858 "listen_address": { 00:23:49.858 "trtype": "TCP", 00:23:49.858 "adrfam": "IPv4", 00:23:49.858 "traddr": "10.0.0.2", 00:23:49.858 "trsvcid": "4420" 00:23:49.858 }, 00:23:49.858 "secure_channel": false, 00:23:49.858 "sock_impl": "ssl" 00:23:49.858 } 00:23:49.858 } 00:23:49.858 ] 00:23:49.858 } 00:23:49.858 ] 00:23:49.858 }' 00:23:49.858 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:50.119 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:50.119 "subsystems": [ 00:23:50.119 { 00:23:50.119 "subsystem": "keyring", 00:23:50.119 "config": [ 00:23:50.119 { 00:23:50.119 "method": "keyring_file_add_key", 00:23:50.119 "params": { 00:23:50.119 "name": "key0", 00:23:50.119 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:50.119 } 00:23:50.119 } 00:23:50.119 ] 00:23:50.119 }, 00:23:50.119 { 00:23:50.119 "subsystem": "iobuf", 00:23:50.119 "config": [ 00:23:50.119 { 00:23:50.119 "method": "iobuf_set_options", 00:23:50.119 "params": { 00:23:50.119 
"small_pool_count": 8192, 00:23:50.119 "large_pool_count": 1024, 00:23:50.119 "small_bufsize": 8192, 00:23:50.119 "large_bufsize": 135168, 00:23:50.119 "enable_numa": false 00:23:50.119 } 00:23:50.119 } 00:23:50.119 ] 00:23:50.119 }, 00:23:50.119 { 00:23:50.119 "subsystem": "sock", 00:23:50.119 "config": [ 00:23:50.119 { 00:23:50.119 "method": "sock_set_default_impl", 00:23:50.119 "params": { 00:23:50.119 "impl_name": "posix" 00:23:50.119 } 00:23:50.119 }, 00:23:50.119 { 00:23:50.119 "method": "sock_impl_set_options", 00:23:50.119 "params": { 00:23:50.119 "impl_name": "ssl", 00:23:50.119 "recv_buf_size": 4096, 00:23:50.119 "send_buf_size": 4096, 00:23:50.119 "enable_recv_pipe": true, 00:23:50.120 "enable_quickack": false, 00:23:50.120 "enable_placement_id": 0, 00:23:50.120 "enable_zerocopy_send_server": true, 00:23:50.120 "enable_zerocopy_send_client": false, 00:23:50.120 "zerocopy_threshold": 0, 00:23:50.120 "tls_version": 0, 00:23:50.120 "enable_ktls": false 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "sock_impl_set_options", 00:23:50.120 "params": { 00:23:50.120 "impl_name": "posix", 00:23:50.120 "recv_buf_size": 2097152, 00:23:50.120 "send_buf_size": 2097152, 00:23:50.120 "enable_recv_pipe": true, 00:23:50.120 "enable_quickack": false, 00:23:50.120 "enable_placement_id": 0, 00:23:50.120 "enable_zerocopy_send_server": true, 00:23:50.120 "enable_zerocopy_send_client": false, 00:23:50.120 "zerocopy_threshold": 0, 00:23:50.120 "tls_version": 0, 00:23:50.120 "enable_ktls": false 00:23:50.120 } 00:23:50.120 } 00:23:50.120 ] 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "subsystem": "vmd", 00:23:50.120 "config": [] 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "subsystem": "accel", 00:23:50.120 "config": [ 00:23:50.120 { 00:23:50.120 "method": "accel_set_options", 00:23:50.120 "params": { 00:23:50.120 "small_cache_size": 128, 00:23:50.120 "large_cache_size": 16, 00:23:50.120 "task_count": 2048, 00:23:50.120 "sequence_count": 2048, 00:23:50.120 
"buf_count": 2048 00:23:50.120 } 00:23:50.120 } 00:23:50.120 ] 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "subsystem": "bdev", 00:23:50.120 "config": [ 00:23:50.120 { 00:23:50.120 "method": "bdev_set_options", 00:23:50.120 "params": { 00:23:50.120 "bdev_io_pool_size": 65535, 00:23:50.120 "bdev_io_cache_size": 256, 00:23:50.120 "bdev_auto_examine": true, 00:23:50.120 "iobuf_small_cache_size": 128, 00:23:50.120 "iobuf_large_cache_size": 16 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_raid_set_options", 00:23:50.120 "params": { 00:23:50.120 "process_window_size_kb": 1024, 00:23:50.120 "process_max_bandwidth_mb_sec": 0 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_iscsi_set_options", 00:23:50.120 "params": { 00:23:50.120 "timeout_sec": 30 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_nvme_set_options", 00:23:50.120 "params": { 00:23:50.120 "action_on_timeout": "none", 00:23:50.120 "timeout_us": 0, 00:23:50.120 "timeout_admin_us": 0, 00:23:50.120 "keep_alive_timeout_ms": 10000, 00:23:50.120 "arbitration_burst": 0, 00:23:50.120 "low_priority_weight": 0, 00:23:50.120 "medium_priority_weight": 0, 00:23:50.120 "high_priority_weight": 0, 00:23:50.120 "nvme_adminq_poll_period_us": 10000, 00:23:50.120 "nvme_ioq_poll_period_us": 0, 00:23:50.120 "io_queue_requests": 512, 00:23:50.120 "delay_cmd_submit": true, 00:23:50.120 "transport_retry_count": 4, 00:23:50.120 "bdev_retry_count": 3, 00:23:50.120 "transport_ack_timeout": 0, 00:23:50.120 "ctrlr_loss_timeout_sec": 0, 00:23:50.120 "reconnect_delay_sec": 0, 00:23:50.120 "fast_io_fail_timeout_sec": 0, 00:23:50.120 "disable_auto_failback": false, 00:23:50.120 "generate_uuids": false, 00:23:50.120 "transport_tos": 0, 00:23:50.120 "nvme_error_stat": false, 00:23:50.120 "rdma_srq_size": 0, 00:23:50.120 "io_path_stat": false, 00:23:50.120 "allow_accel_sequence": false, 00:23:50.120 "rdma_max_cq_size": 0, 00:23:50.120 "rdma_cm_event_timeout_ms": 0, 
00:23:50.120 "dhchap_digests": [ 00:23:50.120 "sha256", 00:23:50.120 "sha384", 00:23:50.120 "sha512" 00:23:50.120 ], 00:23:50.120 "dhchap_dhgroups": [ 00:23:50.120 "null", 00:23:50.120 "ffdhe2048", 00:23:50.120 "ffdhe3072", 00:23:50.120 "ffdhe4096", 00:23:50.120 "ffdhe6144", 00:23:50.120 "ffdhe8192" 00:23:50.120 ] 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_nvme_attach_controller", 00:23:50.120 "params": { 00:23:50.120 "name": "nvme0", 00:23:50.120 "trtype": "TCP", 00:23:50.120 "adrfam": "IPv4", 00:23:50.120 "traddr": "10.0.0.2", 00:23:50.120 "trsvcid": "4420", 00:23:50.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.120 "prchk_reftag": false, 00:23:50.120 "prchk_guard": false, 00:23:50.120 "ctrlr_loss_timeout_sec": 0, 00:23:50.120 "reconnect_delay_sec": 0, 00:23:50.120 "fast_io_fail_timeout_sec": 0, 00:23:50.120 "psk": "key0", 00:23:50.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.120 "hdgst": false, 00:23:50.120 "ddgst": false, 00:23:50.120 "multipath": "multipath" 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_nvme_set_hotplug", 00:23:50.120 "params": { 00:23:50.120 "period_us": 100000, 00:23:50.120 "enable": false 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_enable_histogram", 00:23:50.120 "params": { 00:23:50.120 "name": "nvme0n1", 00:23:50.120 "enable": true 00:23:50.120 } 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "method": "bdev_wait_for_examine" 00:23:50.120 } 00:23:50.120 ] 00:23:50.120 }, 00:23:50.120 { 00:23:50.120 "subsystem": "nbd", 00:23:50.120 "config": [] 00:23:50.120 } 00:23:50.120 ] 00:23:50.120 }' 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2554221 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2554221 ']' 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2554221 00:23:50.120 11:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554221 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554221' 00:23:50.120 killing process with pid 2554221 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2554221 00:23:50.120 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.120 00:23:50.120 Latency(us) 00:23:50.120 [2024-12-07T10:35:49.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.120 [2024-12-07T10:35:49.474Z] =================================================================================================================== 00:23:50.120 [2024-12-07T10:35:49.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.120 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2554221 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2553876 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2553876 ']' 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2553876 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.691 
11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2553876 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2553876' 00:23:50.691 killing process with pid 2553876 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2553876 00:23:50.691 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2553876 00:23:51.632 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:51.632 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.632 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.632 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:51.632 "subsystems": [ 00:23:51.632 { 00:23:51.632 "subsystem": "keyring", 00:23:51.632 "config": [ 00:23:51.632 { 00:23:51.632 "method": "keyring_file_add_key", 00:23:51.632 "params": { 00:23:51.632 "name": "key0", 00:23:51.632 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:51.632 } 00:23:51.632 } 00:23:51.632 ] 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "subsystem": "iobuf", 00:23:51.632 "config": [ 00:23:51.632 { 00:23:51.632 "method": "iobuf_set_options", 00:23:51.632 "params": { 00:23:51.632 "small_pool_count": 8192, 00:23:51.632 "large_pool_count": 1024, 00:23:51.632 "small_bufsize": 8192, 00:23:51.632 "large_bufsize": 135168, 00:23:51.632 "enable_numa": false 00:23:51.632 } 00:23:51.632 } 00:23:51.632 ] 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "subsystem": "sock", 00:23:51.632 "config": [ 
00:23:51.632 { 00:23:51.632 "method": "sock_set_default_impl", 00:23:51.632 "params": { 00:23:51.632 "impl_name": "posix" 00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "sock_impl_set_options", 00:23:51.632 "params": { 00:23:51.632 "impl_name": "ssl", 00:23:51.632 "recv_buf_size": 4096, 00:23:51.632 "send_buf_size": 4096, 00:23:51.632 "enable_recv_pipe": true, 00:23:51.632 "enable_quickack": false, 00:23:51.632 "enable_placement_id": 0, 00:23:51.632 "enable_zerocopy_send_server": true, 00:23:51.632 "enable_zerocopy_send_client": false, 00:23:51.632 "zerocopy_threshold": 0, 00:23:51.632 "tls_version": 0, 00:23:51.632 "enable_ktls": false 00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "sock_impl_set_options", 00:23:51.632 "params": { 00:23:51.632 "impl_name": "posix", 00:23:51.632 "recv_buf_size": 2097152, 00:23:51.632 "send_buf_size": 2097152, 00:23:51.632 "enable_recv_pipe": true, 00:23:51.632 "enable_quickack": false, 00:23:51.632 "enable_placement_id": 0, 00:23:51.632 "enable_zerocopy_send_server": true, 00:23:51.632 "enable_zerocopy_send_client": false, 00:23:51.632 "zerocopy_threshold": 0, 00:23:51.632 "tls_version": 0, 00:23:51.632 "enable_ktls": false 00:23:51.632 } 00:23:51.632 } 00:23:51.632 ] 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "subsystem": "vmd", 00:23:51.632 "config": [] 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "subsystem": "accel", 00:23:51.632 "config": [ 00:23:51.632 { 00:23:51.632 "method": "accel_set_options", 00:23:51.632 "params": { 00:23:51.632 "small_cache_size": 128, 00:23:51.632 "large_cache_size": 16, 00:23:51.632 "task_count": 2048, 00:23:51.632 "sequence_count": 2048, 00:23:51.632 "buf_count": 2048 00:23:51.632 } 00:23:51.632 } 00:23:51.632 ] 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "subsystem": "bdev", 00:23:51.632 "config": [ 00:23:51.632 { 00:23:51.632 "method": "bdev_set_options", 00:23:51.632 "params": { 00:23:51.632 "bdev_io_pool_size": 65535, 00:23:51.632 "bdev_io_cache_size": 
256, 00:23:51.632 "bdev_auto_examine": true, 00:23:51.632 "iobuf_small_cache_size": 128, 00:23:51.632 "iobuf_large_cache_size": 16 00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "bdev_raid_set_options", 00:23:51.632 "params": { 00:23:51.632 "process_window_size_kb": 1024, 00:23:51.632 "process_max_bandwidth_mb_sec": 0 00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "bdev_iscsi_set_options", 00:23:51.632 "params": { 00:23:51.632 "timeout_sec": 30 00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "bdev_nvme_set_options", 00:23:51.632 "params": { 00:23:51.632 "action_on_timeout": "none", 00:23:51.632 "timeout_us": 0, 00:23:51.632 "timeout_admin_us": 0, 00:23:51.632 "keep_alive_timeout_ms": 10000, 00:23:51.632 "arbitration_burst": 0, 00:23:51.632 "low_priority_weight": 0, 00:23:51.632 "medium_priority_weight": 0, 00:23:51.632 "high_priority_weight": 0, 00:23:51.632 "nvme_adminq_poll_period_us": 10000, 00:23:51.632 "nvme_ioq_poll_period_us": 0, 00:23:51.632 "io_queue_requests": 0, 00:23:51.632 "delay_cmd_submit": true, 00:23:51.632 "transport_retry_count": 4, 00:23:51.632 "bdev_retry_count": 3, 00:23:51.632 "transport_ack_timeout": 0, 00:23:51.632 "ctrlr_loss_timeout_sec": 0, 00:23:51.632 "reconnect_delay_sec": 0, 00:23:51.632 "fast_io_fail_timeout_sec": 0, 00:23:51.632 "disable_auto_failback": false, 00:23:51.632 "generate_uuids": false, 00:23:51.632 "transport_tos": 0, 00:23:51.632 "nvme_error_stat": false, 00:23:51.632 "rdma_srq_size": 0, 00:23:51.632 "io_path_stat": false, 00:23:51.632 "allow_accel_sequence": false, 00:23:51.632 "rdma_max_cq_size": 0, 00:23:51.632 "rdma_cm_event_timeout_ms": 0, 00:23:51.632 "dhchap_digests": [ 00:23:51.632 "sha256", 00:23:51.632 "sha384", 00:23:51.632 "sha512" 00:23:51.632 ], 00:23:51.632 "dhchap_dhgroups": [ 00:23:51.632 "null", 00:23:51.632 "ffdhe2048", 00:23:51.632 "ffdhe3072", 00:23:51.632 "ffdhe4096", 00:23:51.632 "ffdhe6144", 00:23:51.632 "ffdhe8192" 00:23:51.632 ] 
00:23:51.632 } 00:23:51.632 }, 00:23:51.632 { 00:23:51.632 "method": "bdev_nvme_set_hotplug", 00:23:51.632 "params": { 00:23:51.632 "period_us": 100000, 00:23:51.632 "enable": false 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "bdev_malloc_create", 00:23:51.633 "params": { 00:23:51.633 "name": "malloc0", 00:23:51.633 "num_blocks": 8192, 00:23:51.633 "block_size": 4096, 00:23:51.633 "physical_block_size": 4096, 00:23:51.633 "uuid": "75c3d760-081d-43a3-ae36-95bc69951cb4", 00:23:51.633 "optimal_io_boundary": 0, 00:23:51.633 "md_size": 0, 00:23:51.633 "dif_type": 0, 00:23:51.633 "dif_is_head_of_md": false, 00:23:51.633 "dif_pi_format": 0 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "bdev_wait_for_examine" 00:23:51.633 } 00:23:51.633 ] 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "subsystem": "nbd", 00:23:51.633 "config": [] 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "subsystem": "scheduler", 00:23:51.633 "config": [ 00:23:51.633 { 00:23:51.633 "method": "framework_set_scheduler", 00:23:51.633 "params": { 00:23:51.633 "name": "static" 00:23:51.633 } 00:23:51.633 } 00:23:51.633 ] 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "subsystem": "nvmf", 00:23:51.633 "config": [ 00:23:51.633 { 00:23:51.633 "method": "nvmf_set_config", 00:23:51.633 "params": { 00:23:51.633 "discovery_filter": "match_any", 00:23:51.633 "admin_cmd_passthru": { 00:23:51.633 "identify_ctrlr": false 00:23:51.633 }, 00:23:51.633 "dhchap_digests": [ 00:23:51.633 "sha256", 00:23:51.633 "sha384", 00:23:51.633 "sha512" 00:23:51.633 ], 00:23:51.633 "dhchap_dhgroups": [ 00:23:51.633 "null", 00:23:51.633 "ffdhe2048", 00:23:51.633 "ffdhe3072", 00:23:51.633 "ffdhe4096", 00:23:51.633 "ffdhe6144", 00:23:51.633 "ffdhe8192" 00:23:51.633 ] 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "nvmf_set_max_subsystems", 00:23:51.633 "params": { 00:23:51.633 "max_subsystems": 1024 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": 
"nvmf_set_crdt", 00:23:51.633 "params": { 00:23:51.633 "crdt1": 0, 00:23:51.633 "crdt2": 0, 00:23:51.633 "crdt3": 0 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "nvmf_create_transport", 00:23:51.633 "params": { 00:23:51.633 "trtype": "TCP", 00:23:51.633 "max_queue_depth": 128, 00:23:51.633 "max_io_qpairs_per_ctrlr": 127, 00:23:51.633 "in_capsule_data_size": 4096, 00:23:51.633 "max_io_size": 131072, 00:23:51.633 "io_unit_size": 131072, 00:23:51.633 "max_aq_depth": 128, 00:23:51.633 "num_shared_buffers": 511, 00:23:51.633 "buf_cache_size": 4294967295, 00:23:51.633 "dif_insert_or_strip": false, 00:23:51.633 "zcopy": false, 00:23:51.633 "c2h_success": false, 00:23:51.633 "sock_priority": 0, 00:23:51.633 "abort_timeout_sec": 1, 00:23:51.633 "ack_timeout": 0, 00:23:51.633 "data_wr_pool_size": 0 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "nvmf_create_subsystem", 00:23:51.633 "params": { 00:23:51.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.633 "allow_any_host": false, 00:23:51.633 "serial_number": "00000000000000000000", 00:23:51.633 "model_number": "SPDK bdev Controller", 00:23:51.633 "max_namespaces": 32, 00:23:51.633 "min_cntlid": 1, 00:23:51.633 "max_cntlid": 65519, 00:23:51.633 "ana_reporting": false 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "nvmf_subsystem_add_host", 00:23:51.633 "params": { 00:23:51.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.633 "host": "nqn.2016-06.io.spdk:host1", 00:23:51.633 "psk": "key0" 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 00:23:51.633 "method": "nvmf_subsystem_add_ns", 00:23:51.633 "params": { 00:23:51.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.633 "namespace": { 00:23:51.633 "nsid": 1, 00:23:51.633 "bdev_name": "malloc0", 00:23:51.633 "nguid": "75C3D760081D43A3AE3695BC69951CB4", 00:23:51.633 "uuid": "75c3d760-081d-43a3-ae36-95bc69951cb4", 00:23:51.633 "no_auto_visible": false 00:23:51.633 } 00:23:51.633 } 00:23:51.633 }, 00:23:51.633 { 
00:23:51.633 "method": "nvmf_subsystem_add_listener", 00:23:51.633 "params": { 00:23:51.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.633 "listen_address": { 00:23:51.633 "trtype": "TCP", 00:23:51.633 "adrfam": "IPv4", 00:23:51.633 "traddr": "10.0.0.2", 00:23:51.633 "trsvcid": "4420" 00:23:51.633 }, 00:23:51.633 "secure_channel": false, 00:23:51.633 "sock_impl": "ssl" 00:23:51.633 } 00:23:51.633 } 00:23:51.633 ] 00:23:51.633 } 00:23:51.633 ] 00:23:51.633 }' 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2554919 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2554919 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2554919 ']' 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.633 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.633 [2024-12-07 11:35:50.816807] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:23:51.633 [2024-12-07 11:35:50.816913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.633 [2024-12-07 11:35:50.965273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.894 [2024-12-07 11:35:51.064681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.894 [2024-12-07 11:35:51.064730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.894 [2024-12-07 11:35:51.064742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.894 [2024-12-07 11:35:51.064753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.894 [2024-12-07 11:35:51.064765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.894 [2024-12-07 11:35:51.066037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.155 [2024-12-07 11:35:51.473349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.155 [2024-12-07 11:35:51.505376] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.155 [2024-12-07 11:35:51.505661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2555240 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2555240 /var/tmp/bdevperf.sock 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2555240 ']' 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:52.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.416 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:52.416 "subsystems": [ 00:23:52.416 { 00:23:52.416 "subsystem": "keyring", 00:23:52.416 "config": [ 00:23:52.416 { 00:23:52.416 "method": "keyring_file_add_key", 00:23:52.416 "params": { 00:23:52.416 "name": "key0", 00:23:52.416 "path": "/tmp/tmp.jpsCLJTpOO" 00:23:52.416 } 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "iobuf", 00:23:52.416 "config": [ 00:23:52.416 { 00:23:52.416 "method": "iobuf_set_options", 00:23:52.416 "params": { 00:23:52.416 "small_pool_count": 8192, 00:23:52.416 "large_pool_count": 1024, 00:23:52.416 "small_bufsize": 8192, 00:23:52.416 "large_bufsize": 135168, 00:23:52.416 "enable_numa": false 00:23:52.416 } 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "sock", 00:23:52.416 "config": [ 00:23:52.416 { 00:23:52.416 "method": "sock_set_default_impl", 00:23:52.416 "params": { 00:23:52.416 "impl_name": "posix" 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "sock_impl_set_options", 00:23:52.416 "params": { 00:23:52.416 "impl_name": "ssl", 00:23:52.416 "recv_buf_size": 4096, 00:23:52.416 "send_buf_size": 4096, 00:23:52.416 "enable_recv_pipe": true, 00:23:52.416 "enable_quickack": false, 00:23:52.416 "enable_placement_id": 0, 00:23:52.416 "enable_zerocopy_send_server": true, 00:23:52.416 
"enable_zerocopy_send_client": false, 00:23:52.416 "zerocopy_threshold": 0, 00:23:52.416 "tls_version": 0, 00:23:52.416 "enable_ktls": false 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "sock_impl_set_options", 00:23:52.416 "params": { 00:23:52.416 "impl_name": "posix", 00:23:52.416 "recv_buf_size": 2097152, 00:23:52.416 "send_buf_size": 2097152, 00:23:52.416 "enable_recv_pipe": true, 00:23:52.416 "enable_quickack": false, 00:23:52.416 "enable_placement_id": 0, 00:23:52.416 "enable_zerocopy_send_server": true, 00:23:52.416 "enable_zerocopy_send_client": false, 00:23:52.416 "zerocopy_threshold": 0, 00:23:52.416 "tls_version": 0, 00:23:52.416 "enable_ktls": false 00:23:52.416 } 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "vmd", 00:23:52.416 "config": [] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "accel", 00:23:52.416 "config": [ 00:23:52.416 { 00:23:52.416 "method": "accel_set_options", 00:23:52.416 "params": { 00:23:52.416 "small_cache_size": 128, 00:23:52.416 "large_cache_size": 16, 00:23:52.416 "task_count": 2048, 00:23:52.416 "sequence_count": 2048, 00:23:52.416 "buf_count": 2048 00:23:52.416 } 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "bdev", 00:23:52.416 "config": [ 00:23:52.416 { 00:23:52.416 "method": "bdev_set_options", 00:23:52.416 "params": { 00:23:52.416 "bdev_io_pool_size": 65535, 00:23:52.416 "bdev_io_cache_size": 256, 00:23:52.416 "bdev_auto_examine": true, 00:23:52.416 "iobuf_small_cache_size": 128, 00:23:52.416 "iobuf_large_cache_size": 16 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_raid_set_options", 00:23:52.416 "params": { 00:23:52.416 "process_window_size_kb": 1024, 00:23:52.416 "process_max_bandwidth_mb_sec": 0 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_iscsi_set_options", 00:23:52.416 "params": { 00:23:52.416 "timeout_sec": 30 00:23:52.416 } 00:23:52.416 }, 
00:23:52.416 { 00:23:52.416 "method": "bdev_nvme_set_options", 00:23:52.416 "params": { 00:23:52.416 "action_on_timeout": "none", 00:23:52.416 "timeout_us": 0, 00:23:52.416 "timeout_admin_us": 0, 00:23:52.416 "keep_alive_timeout_ms": 10000, 00:23:52.416 "arbitration_burst": 0, 00:23:52.416 "low_priority_weight": 0, 00:23:52.416 "medium_priority_weight": 0, 00:23:52.416 "high_priority_weight": 0, 00:23:52.416 "nvme_adminq_poll_period_us": 10000, 00:23:52.416 "nvme_ioq_poll_period_us": 0, 00:23:52.416 "io_queue_requests": 512, 00:23:52.416 "delay_cmd_submit": true, 00:23:52.416 "transport_retry_count": 4, 00:23:52.416 "bdev_retry_count": 3, 00:23:52.416 "transport_ack_timeout": 0, 00:23:52.416 "ctrlr_loss_timeout_sec": 0, 00:23:52.416 "reconnect_delay_sec": 0, 00:23:52.416 "fast_io_fail_timeout_sec": 0, 00:23:52.416 "disable_auto_failback": false, 00:23:52.416 "generate_uuids": false, 00:23:52.416 "transport_tos": 0, 00:23:52.416 "nvme_error_stat": false, 00:23:52.416 "rdma_srq_size": 0, 00:23:52.416 "io_path_stat": false, 00:23:52.416 "allow_accel_sequence": false, 00:23:52.416 "rdma_max_cq_size": 0, 00:23:52.416 "rdma_cm_event_timeout_ms": 0, 00:23:52.416 "dhchap_digests": [ 00:23:52.416 "sha256", 00:23:52.416 "sha384", 00:23:52.416 "sha512" 00:23:52.416 ], 00:23:52.416 "dhchap_dhgroups": [ 00:23:52.416 "null", 00:23:52.416 "ffdhe2048", 00:23:52.416 "ffdhe3072", 00:23:52.416 "ffdhe4096", 00:23:52.416 "ffdhe6144", 00:23:52.416 "ffdhe8192" 00:23:52.416 ] 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_nvme_attach_controller", 00:23:52.416 "params": { 00:23:52.416 "name": "nvme0", 00:23:52.416 "trtype": "TCP", 00:23:52.416 "adrfam": "IPv4", 00:23:52.416 "traddr": "10.0.0.2", 00:23:52.416 "trsvcid": "4420", 00:23:52.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.416 "prchk_reftag": false, 00:23:52.416 "prchk_guard": false, 00:23:52.416 "ctrlr_loss_timeout_sec": 0, 00:23:52.416 "reconnect_delay_sec": 0, 00:23:52.416 
"fast_io_fail_timeout_sec": 0, 00:23:52.416 "psk": "key0", 00:23:52.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.416 "hdgst": false, 00:23:52.416 "ddgst": false, 00:23:52.416 "multipath": "multipath" 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_nvme_set_hotplug", 00:23:52.416 "params": { 00:23:52.416 "period_us": 100000, 00:23:52.416 "enable": false 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_enable_histogram", 00:23:52.416 "params": { 00:23:52.416 "name": "nvme0n1", 00:23:52.416 "enable": true 00:23:52.416 } 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "method": "bdev_wait_for_examine" 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }, 00:23:52.416 { 00:23:52.416 "subsystem": "nbd", 00:23:52.416 "config": [] 00:23:52.416 } 00:23:52.416 ] 00:23:52.416 }' 00:23:52.416 [2024-12-07 11:35:51.706884] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:52.416 [2024-12-07 11:35:51.706995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555240 ] 00:23:52.676 [2024-12-07 11:35:51.840451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.676 [2024-12-07 11:35:51.914832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.935 [2024-12-07 11:35:52.178521] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.195 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.195 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.195 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:23:53.195 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:53.454 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.454 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.454 Running I/O for 1 seconds... 00:23:54.394 4045.00 IOPS, 15.80 MiB/s 00:23:54.394 Latency(us) 00:23:54.394 [2024-12-07T10:35:53.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.394 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:54.394 Verification LBA range: start 0x0 length 0x2000 00:23:54.394 nvme0n1 : 1.02 4094.47 15.99 0.00 0.00 30959.26 7263.57 36918.61 00:23:54.394 [2024-12-07T10:35:53.748Z] =================================================================================================================== 00:23:54.394 [2024-12-07T10:35:53.748Z] Total : 4094.47 15.99 0.00 0.00 30959.26 7263.57 36918.61 00:23:54.654 { 00:23:54.654 "results": [ 00:23:54.654 { 00:23:54.654 "job": "nvme0n1", 00:23:54.654 "core_mask": "0x2", 00:23:54.654 "workload": "verify", 00:23:54.654 "status": "finished", 00:23:54.654 "verify_range": { 00:23:54.654 "start": 0, 00:23:54.654 "length": 8192 00:23:54.654 }, 00:23:54.654 "queue_depth": 128, 00:23:54.654 "io_size": 4096, 00:23:54.654 "runtime": 1.019423, 00:23:54.654 "iops": 4094.4730499508055, 00:23:54.654 "mibps": 15.994035351370334, 00:23:54.654 "io_failed": 0, 00:23:54.654 "io_timeout": 0, 00:23:54.654 "avg_latency_us": 30959.263708672734, 00:23:54.654 "min_latency_us": 7263.573333333334, 00:23:54.654 "max_latency_us": 36918.613333333335 00:23:54.654 } 00:23:54.654 ], 00:23:54.654 "core_count": 1 00:23:54.654 } 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:54.654 nvmf_trace.0 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2555240 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2555240 ']' 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2555240 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.654 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555240 00:23:54.655 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:54.655 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:54.655 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555240' 00:23:54.655 killing process with pid 2555240 00:23:54.655 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2555240 00:23:54.655 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.655 00:23:54.655 Latency(us) 00:23:54.655 [2024-12-07T10:35:54.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.655 [2024-12-07T10:35:54.009Z] =================================================================================================================== 00:23:54.655 [2024-12-07T10:35:54.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.655 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2555240 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.223 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.223 rmmod nvme_tcp 00:23:55.223 rmmod nvme_fabrics 00:23:55.224 rmmod nvme_keyring 00:23:55.224 11:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2554919 ']' 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2554919 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2554919 ']' 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2554919 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554919 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554919' 00:23:55.224 killing process with pid 2554919 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2554919 00:23:55.224 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2554919 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.163 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.lmh7uKvQQb /tmp/tmp.TEqxtPofw4 /tmp/tmp.jpsCLJTpOO 00:23:58.699 00:23:58.699 real 1m37.419s 00:23:58.699 user 2m31.403s 00:23:58.699 sys 0m28.476s 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.699 ************************************ 00:23:58.699 END TEST nvmf_tls 00:23:58.699 ************************************ 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:58.699 ************************************ 00:23:58.699 START TEST nvmf_fips 00:23:58.699 ************************************ 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:58.699 * Looking for test storage... 00:23:58.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.699 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.700 11:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.700 11:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.700 --rc genhtml_branch_coverage=1 00:23:58.700 --rc genhtml_function_coverage=1 00:23:58.700 --rc genhtml_legend=1 00:23:58.700 --rc geninfo_all_blocks=1 00:23:58.700 --rc geninfo_unexecuted_blocks=1 00:23:58.700 00:23:58.700 ' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.700 --rc genhtml_branch_coverage=1 00:23:58.700 --rc genhtml_function_coverage=1 00:23:58.700 --rc genhtml_legend=1 00:23:58.700 --rc geninfo_all_blocks=1 00:23:58.700 --rc geninfo_unexecuted_blocks=1 00:23:58.700 00:23:58.700 ' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.700 --rc genhtml_branch_coverage=1 00:23:58.700 --rc genhtml_function_coverage=1 00:23:58.700 --rc genhtml_legend=1 00:23:58.700 --rc geninfo_all_blocks=1 00:23:58.700 --rc geninfo_unexecuted_blocks=1 00:23:58.700 00:23:58.700 ' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:58.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.700 --rc genhtml_branch_coverage=1 00:23:58.700 --rc genhtml_function_coverage=1 00:23:58.700 --rc genhtml_legend=1 00:23:58.700 --rc geninfo_all_blocks=1 00:23:58.700 --rc geninfo_unexecuted_blocks=1 00:23:58.700 00:23:58.700 ' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:58.700 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:58.701 Error setting digest 00:23:58.701 4062BEFB797F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:58.701 4062BEFB797F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.701 11:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.701 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.842 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:06.843 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:06.843 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:06.843 Found net devices under 0000:31:00.0: cvl_0_0 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:24:06.843 Found net devices under 0000:31:00.1: cvl_0_1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:06.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:06.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms
00:24:06.843 
00:24:06.843 --- 10.0.0.2 ping statistics ---
00:24:06.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.843 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:06.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:06.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:24:06.843 
00:24:06.843 --- 10.0.0.1 ping statistics ---
00:24:06.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.843 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2560283
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2560283
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2560283 ']'
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:06.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:06.843 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:06.843 [2024-12-07 11:36:05.727487] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:24:06.844 [2024-12-07 11:36:05.727618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:06.844 [2024-12-07 11:36:05.893765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:06.844 [2024-12-07 11:36:06.018842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:06.844 [2024-12-07 11:36:06.018914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:06.844 [2024-12-07 11:36:06.018928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:06.844 [2024-12-07 11:36:06.018944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:06.844 [2024-12-07 11:36:06.018954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:06.844 [2024-12-07 11:36:06.020452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.bic
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.bic
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.bic
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.bic
00:24:07.423 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:07.423 [2024-12-07 11:36:06.685676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:07.423 [2024-12-07 11:36:06.701647] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:24:07.423 [2024-12-07 11:36:06.702099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:07.423 malloc0
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2560502
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2560502 /var/tmp/bdevperf.sock
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2560502 ']'
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:07.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:07.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:07.684 [2024-12-07 11:36:06.908243] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:24:07.684 [2024-12-07 11:36:06.908371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560502 ]
00:24:07.684 [2024-12-07 11:36:07.033177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:07.945 [2024-12-07 11:36:07.110856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:08.518 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:08.518 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:24:08.518 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.bic
00:24:08.518 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:24:08.779 [2024-12-07 11:36:07.985423] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:08.779 TLSTESTn1
00:24:08.779 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:09.040 Running I/O for 10 seconds...
00:24:11.019 4575.00 IOPS, 17.87 MiB/s [2024-12-07T10:36:11.315Z] 4149.50 IOPS, 16.21 MiB/s [2024-12-07T10:36:12.256Z] 4339.33 IOPS, 16.95 MiB/s [2024-12-07T10:36:13.200Z] 4563.00 IOPS, 17.82 MiB/s [2024-12-07T10:36:14.582Z] 4664.20 IOPS, 18.22 MiB/s [2024-12-07T10:36:15.521Z] 4502.00 IOPS, 17.59 MiB/s [2024-12-07T10:36:16.461Z] 4455.29 IOPS, 17.40 MiB/s [2024-12-07T10:36:17.401Z] 4539.38 IOPS, 17.73 MiB/s [2024-12-07T10:36:18.340Z] 4482.67 IOPS, 17.51 MiB/s [2024-12-07T10:36:18.340Z] 4492.10 IOPS, 17.55 MiB/s
00:24:18.986 Latency(us)
00:24:18.986 [2024-12-07T10:36:18.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:18.986 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:18.986 Verification LBA range: start 0x0 length 0x2000
00:24:18.986 TLSTESTn1 : 10.02 4494.26 17.56 0.00 0.00 28434.85 7973.55 55050.24
00:24:18.986 [2024-12-07T10:36:18.340Z] ===================================================================================================================
00:24:18.986 [2024-12-07T10:36:18.340Z] Total : 4494.26 17.56 0.00 0.00 28434.85 7973.55 55050.24
00:24:18.986 {
00:24:18.986 "results": [
00:24:18.986 {
00:24:18.986 "job": "TLSTESTn1",
00:24:18.986 "core_mask": "0x4",
00:24:18.986 "workload": "verify",
00:24:18.986 "status": "finished",
00:24:18.986 "verify_range": {
00:24:18.986 "start": 0,
00:24:18.986 "length": 8192
00:24:18.986 },
00:24:18.986 "queue_depth": 128,
00:24:18.986 "io_size": 4096,
00:24:18.986 "runtime": 10.023673,
00:24:18.986 "iops": 4494.260736558345,
00:24:18.986 "mibps": 17.555706002181036,
00:24:18.986 "io_failed": 0,
00:24:18.986 "io_timeout": 0,
00:24:18.986 "avg_latency_us": 28434.85345941826,
00:24:18.986 "min_latency_us": 7973.546666666667,
00:24:18.986 "max_latency_us": 55050.24
00:24:18.986 }
00:24:18.986 ],
00:24:18.986 "core_count": 1
00:24:18.986 }
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:18.986 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:18.986 nvmf_trace.0
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2560502
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2560502 ']'
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2560502
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560502
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560502'
00:24:19.246 killing process with pid 2560502
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2560502
00:24:19.246 Received shutdown signal, test time was about 10.000000 seconds
00:24:19.246 
00:24:19.246 Latency(us)
00:24:19.246 [2024-12-07T10:36:18.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.246 [2024-12-07T10:36:18.600Z] ===================================================================================================================
00:24:19.246 [2024-12-07T10:36:18.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:19.246 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2560502
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:19.819 rmmod nvme_tcp
00:24:19.819 rmmod nvme_fabrics
00:24:19.819 rmmod nvme_keyring
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2560283 ']'
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2560283
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2560283 ']'
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2560283
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:19.819 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560283
00:24:19.819 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:19.819 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:19.819 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560283'
00:24:19.819 killing process with pid 2560283
00:24:19.819 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2560283
00:24:19.819 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2560283
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:20.389 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.bic
00:24:22.934 
00:24:22.934 real 0m24.215s
00:24:22.934 user 0m25.883s
00:24:22.934 sys 0m9.943s
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:22.934 ************************************
00:24:22.934 END TEST nvmf_fips
00:24:22.934 ************************************
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:22.934 11:36:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:22.935 11:36:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:22.935 ************************************
00:24:22.935 START TEST nvmf_control_msg_list
00:24:22.935 ************************************
00:24:22.935 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:22.935 * Looking for test storage...
00:24:22.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:22.935 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:22.935 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version
00:24:22.935 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:24:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:22.935 --rc genhtml_branch_coverage=1
00:24:22.935 --rc genhtml_function_coverage=1
00:24:22.935 --rc genhtml_legend=1
00:24:22.935 --rc geninfo_all_blocks=1
00:24:22.935 --rc geninfo_unexecuted_blocks=1
00:24:22.935 
00:24:22.935 '
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:24:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:22.935 --rc genhtml_branch_coverage=1
00:24:22.935 --rc genhtml_function_coverage=1
00:24:22.935 --rc genhtml_legend=1
00:24:22.935 --rc geninfo_all_blocks=1
00:24:22.935 --rc geninfo_unexecuted_blocks=1
00:24:22.935 
00:24:22.935 '
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:22.935 --rc genhtml_branch_coverage=1 
00:24:22.935 --rc genhtml_function_coverage=1 
00:24:22.935 --rc genhtml_legend=1 
00:24:22.935 --rc geninfo_all_blocks=1 
00:24:22.935 --rc geninfo_unexecuted_blocks=1 
00:24:22.935 
00:24:22.935 '
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:22.935 --rc genhtml_branch_coverage=1 
00:24:22.935 --rc genhtml_function_coverage=1 
00:24:22.935 --rc genhtml_legend=1 
00:24:22.935 --rc geninfo_all_blocks=1 
00:24:22.935 --rc geninfo_unexecuted_blocks=1 
00:24:22.935 
00:24:22.935 '
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:22.935 11:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.935 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.936 11:36:22 
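The PATH dumps traced above keep growing because `paths/export.sh` prepends its tool directories unconditionally on every `source`. A minimal idempotent-prepend guard avoids that; `pathmunge` is our illustrative name, not an SPDK helper:

```shell
# Prepend a directory to PATH only if it is not already present.
# (Sketch; pathmunge is a hypothetical helper, not part of SPDK.)
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;      # otherwise prepend exactly once
    esac
}

pathmunge /opt/go/1.21.1/bin
pathmunge /opt/go/1.21.1/bin   # second call is a no-op
export PATH
```

With this guard, repeated sourcing of the export script would leave each tool directory on PATH once instead of the six-fold repetition visible in the log.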
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.936 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.079 11:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:31.079 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:31.079 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.079 11:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:31.079 Found net devices under 0000:31:00.0: cvl_0_0 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.079 11:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:31.079 Found net devices under 0000:31:00.1: cvl_0_1 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.079 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.080 11:36:29 
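The namespace plumbing traced above (target NIC moved into `cvl_0_0_ns_spdk`, initiator NIC left in the root namespace) can be summarized as a dry-run sketch. Interface names and addresses follow the log; the function only prints the commands, so it runs unprivileged. Remove the leading `echo` to execute for real (root required):

```shell
# Dry-run sketch of the netns setup sequence from the log.
# Arguments: namespace name, target-side NIC, initiator-side NIC.
setup_nvmf_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    echo ip netns add "$ns"
    echo ip link set "$tgt_if" netns "$ns"
    echo ip addr add 10.0.0.1/24 dev "$ini_if"
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    echo ip link set "$ini_if" up
    echo ip netns exec "$ns" ip link set "$tgt_if" up
    echo ip netns exec "$ns" ip link set lo up
}

setup_nvmf_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

This isolates the target's network stack so that the TCP traffic between initiator (10.0.0.1) and target (10.0.0.2) crosses the physical link rather than loopback, which is why the pings below go through the namespace.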
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:24:31.080 00:24:31.080 --- 10.0.0.2 ping statistics --- 00:24:31.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.080 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:31.080 00:24:31.080 --- 10.0.0.1 ping statistics --- 00:24:31.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.080 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2567693 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2567693 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2567693 ']' 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.080 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.080 [2024-12-07 11:36:29.788005] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:31.080 [2024-12-07 11:36:29.788143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.080 [2024-12-07 11:36:29.936665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.080 [2024-12-07 11:36:30.035858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.080 [2024-12-07 11:36:30.035903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.080 [2024-12-07 11:36:30.035915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.080 [2024-12-07 11:36:30.035927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.080 [2024-12-07 11:36:30.035938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.080 [2024-12-07 11:36:30.037131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 [2024-12-07 11:36:30.585939] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 Malloc0 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.341 [2024-12-07 11:36:30.656929] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2568005
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2568007
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2568009
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2568005
00:24:31.341 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:31.602 [2024-12-07 11:36:30.767863] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:31.602 [2024-12-07 11:36:30.798245] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:31.602 [2024-12-07 11:36:30.798637] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:32.542 Initializing NVMe Controllers
00:24:32.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:32.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:32.542 Initialization complete. Launching workers.
00:24:32.542 ========================================================
00:24:32.542 Latency(us)
00:24:32.542 Device Information : IOPS MiB/s Average min max
00:24:32.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 24.00 0.09 41894.25 41766.29 41951.88
00:24:32.542 ========================================================
00:24:32.542 Total : 24.00 0.09 41894.25 41766.29 41951.88
00:24:32.542
00:24:32.803 Initializing NVMe Controllers
00:24:32.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:32.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:32.803 Initialization complete. Launching workers.
00:24:32.803 ========================================================
00:24:32.803 Latency(us)
00:24:32.803 Device Information : IOPS MiB/s Average min max
00:24:32.803 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41183.33 40842.80 47365.28
00:24:32.803 ========================================================
00:24:32.803 Total : 25.00 0.10 41183.33 40842.80 47365.28
00:24:32.803
00:24:32.803 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2568007
00:24:32.803 Initializing NVMe Controllers
00:24:32.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:32.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:32.803 Initialization complete. Launching workers.
00:24:32.803 ========================================================
00:24:32.803 Latency(us)
00:24:32.803 Device Information : IOPS MiB/s Average min max
00:24:32.803 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1546.00 6.04 646.57 218.59 924.39
00:24:32.803 ========================================================
00:24:32.803 Total : 1546.00 6.04 646.57 218.59 924.39
00:24:32.803
00:24:32.803 [2024-12-07 11:36:32.000833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set
00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2568009
00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
nvmf/common.sh@121 -- # sync 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.803 rmmod nvme_tcp 00:24:32.803 rmmod nvme_fabrics 00:24:32.803 rmmod nvme_keyring 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2567693 ']' 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2567693 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2567693 ']' 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2567693 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.803 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567693 00:24:33.064 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.064 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.064 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567693' 00:24:33.064 killing process with pid 2567693 00:24:33.064 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2567693 00:24:33.064 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2567693 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.003 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.003 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.003 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.003 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.003 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.003 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.953 11:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:35.953
00:24:35.953 real 0m13.252s
00:24:35.953 user 0m8.694s
00:24:35.953 sys 0m6.775s
00:24:35.953 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:35.954 ************************************
00:24:35.954 END TEST nvmf_control_msg_list
00:24:35.954 ************************************
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:35.954 ************************************
00:24:35.954 START TEST nvmf_wait_for_buf
00:24:35.954 ************************************
00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:35.954 * Looking for test storage...
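(Editor's note, not part of the log: the test summary above reports bash `time` values such as real 0m13.252s. As an illustrative, hypothetical helper outside the test scripts, such a value can be converted to plain seconds like this.)

```python
# Illustrative only: convert a bash `time` field like "0m13.252s"
# (as printed in the END TEST summary above) into float seconds.
# This helper is NOT part of the SPDK autotest scripts.
import re

def time_to_seconds(value: str) -> float:
    """Parse '<minutes>m<seconds>s' into total seconds."""
    match = re.fullmatch(r"(\d+)m([\d.]+)s", value)
    minutes, seconds = match.groups()
    return int(minutes) * 60 + float(seconds)

print(time_to_seconds("0m13.252s"))  # -> 13.252
```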
00:24:35.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:35.954 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.214 --rc genhtml_branch_coverage=1 00:24:36.214 --rc genhtml_function_coverage=1 00:24:36.214 --rc genhtml_legend=1 00:24:36.214 --rc geninfo_all_blocks=1 00:24:36.214 --rc geninfo_unexecuted_blocks=1 00:24:36.214 00:24:36.214 ' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.214 --rc genhtml_branch_coverage=1 00:24:36.214 --rc genhtml_function_coverage=1 00:24:36.214 --rc genhtml_legend=1 00:24:36.214 --rc geninfo_all_blocks=1 00:24:36.214 --rc geninfo_unexecuted_blocks=1 00:24:36.214 00:24:36.214 ' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.214 --rc genhtml_branch_coverage=1 00:24:36.214 --rc genhtml_function_coverage=1 00:24:36.214 --rc genhtml_legend=1 00:24:36.214 --rc geninfo_all_blocks=1 00:24:36.214 --rc geninfo_unexecuted_blocks=1 00:24:36.214 00:24:36.214 ' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.214 --rc genhtml_branch_coverage=1 00:24:36.214 --rc genhtml_function_coverage=1 00:24:36.214 --rc genhtml_legend=1 00:24:36.214 --rc geninfo_all_blocks=1 00:24:36.214 --rc geninfo_unexecuted_blocks=1 00:24:36.214 00:24:36.214 ' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.214 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.215 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:44.351 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:44.352 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:44.352 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:44.352 Found net devices under 0000:31:00.0: cvl_0_0 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.352 11:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:44.352 Found net devices under 0000:31:00.1: cvl_0_1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.352 11:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.352 11:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:24:44.352 00:24:44.352 --- 10.0.0.2 ping statistics --- 00:24:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.352 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:44.352 00:24:44.352 --- 10.0.0.1 ping statistics --- 00:24:44.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.352 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.352 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2572477 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2572477 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2572477 ']' 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.353 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.353 [2024-12-07 11:36:42.699162] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:44.353 [2024-12-07 11:36:42.699307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.353 [2024-12-07 11:36:42.849407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.353 [2024-12-07 11:36:42.948043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.353 [2024-12-07 11:36:42.948090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:44.353 [2024-12-07 11:36:42.948105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.353 [2024-12-07 11:36:42.948121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.353 [2024-12-07 11:36:42.948136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.353 [2024-12-07 11:36:42.949416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.353 
11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.353 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 Malloc0 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.613 [2024-12-07 11:36:43.753914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 [2024-12-07 11:36:43.790215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
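Editor's note: the rpc_cmd calls above (wait_for_buf.sh@19-26) provision the target in a fixed order: shrink the iobuf small pool to 154 buffers so the transport is forced into buffer-wait retries, then create the malloc bdev, the TCP transport, the subsystem, its namespace, and the listener on 10.0.0.2:4420. A hedged sketch of the same sequence as it would be issued through SPDK's scripts/rpc.py follows; here RPC defaults to echo so the sketch prints the calls instead of requiring a live target, and the flag spellings are copied from the log as-is.

```shell
# Replay of the provisioning RPCs from the wait_for_buf test, in order.
# Point RPC at scripts/rpc.py (with -s /var/tmp/spdk.sock) to run for real;
# the default, echo, just prints each call.
provision_wait_for_buf_target() {
  local RPC=${RPC:-echo}
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  # 154 small buffers is deliberately tiny: the test wants pool exhaustion.
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init
  $RPC bdev_malloc_create -b Malloc0 32 512            # 32 MiB bdev, 512 B blocks
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}

provision_wait_for_buf_target
```

With the echo default this prints the eight RPC invocations in the order the log shows them being made.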
00:24:44.613 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.613 [2024-12-07 11:36:43.929541] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.996 Initializing NVMe Controllers 00:24:45.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:45.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:45.996 Initialization complete. Launching workers. 00:24:45.996 ======================================================== 00:24:45.996 Latency(us) 00:24:45.996 Device Information : IOPS MiB/s Average min max 00:24:45.996 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33290.05 23977.21 63857.05 00:24:45.996 ======================================================== 00:24:45.996 Total : 125.00 15.62 33290.05 23977.21 63857.05 00:24:45.996 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.255 11:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.255 rmmod nvme_tcp 00:24:46.255 rmmod nvme_fabrics 00:24:46.255 rmmod nvme_keyring 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2572477 ']' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2572477 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2572477 ']' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2572477 
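Editor's note: the pass criterion above (wait_for_buf.sh@32-33) is that the nvmf_TCP small-buffer pool recorded retries, i.e. the deliberately shrunken pool really was exhausted during the perf run; retry_count came back as 1974, and the test would fail only if it were 0. The jq filter from the log can be sketched against an illustrative stats payload (the JSON shape below is an assumption, reduced to just the fields the filter touches).

```shell
# Extract the small-pool retry counter the same way the test does.
# $stats stands in for `rpc.py iobuf_get_stats` output; only the fields the
# jq filter reads are included in this illustrative sample.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":1974},"large_pool":{"retry":0}}]'
retry_count=$(echo "$stats" | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
echo "retry_count=$retry_count"
if [[ "$retry_count" -eq 0 ]]; then
  echo "FAIL: small iobuf pool was never exhausted"
fi
```

A non-zero counter is what proves the wait-for-buffer path was actually exercised rather than every request being served from a free buffer.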
00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572477 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572477' 00:24:46.255 killing process with pid 2572477 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2572477 00:24:46.255 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2572477 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.195 11:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.195 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.109 00:24:49.109 real 0m13.187s 00:24:49.109 user 0m5.631s 00:24:49.109 sys 0m6.093s 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:49.109 ************************************ 00:24:49.109 END TEST nvmf_wait_for_buf 00:24:49.109 ************************************ 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:49.109 ************************************ 00:24:49.109 START TEST nvmf_fuzz 00:24:49.109 ************************************ 00:24:49.109 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:49.371 * Looking for test storage... 00:24:49.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:49.371 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:49.372 11:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.372 --rc genhtml_branch_coverage=1 00:24:49.372 --rc genhtml_function_coverage=1 
00:24:49.372 --rc genhtml_legend=1 00:24:49.372 --rc geninfo_all_blocks=1 00:24:49.372 --rc geninfo_unexecuted_blocks=1 00:24:49.372 00:24:49.372 ' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.372 --rc genhtml_branch_coverage=1 00:24:49.372 --rc genhtml_function_coverage=1 00:24:49.372 --rc genhtml_legend=1 00:24:49.372 --rc geninfo_all_blocks=1 00:24:49.372 --rc geninfo_unexecuted_blocks=1 00:24:49.372 00:24:49.372 ' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.372 --rc genhtml_branch_coverage=1 00:24:49.372 --rc genhtml_function_coverage=1 00:24:49.372 --rc genhtml_legend=1 00:24:49.372 --rc geninfo_all_blocks=1 00:24:49.372 --rc geninfo_unexecuted_blocks=1 00:24:49.372 00:24:49.372 ' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.372 --rc genhtml_branch_coverage=1 00:24:49.372 --rc genhtml_function_coverage=1 00:24:49.372 --rc genhtml_legend=1 00:24:49.372 --rc geninfo_all_blocks=1 00:24:49.372 --rc geninfo_unexecuted_blocks=1 00:24:49.372 00:24:49.372 ' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.372 
11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.372 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.509 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.510 11:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:57.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:57.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:57.510 Found net devices under 0000:31:00.0: cvl_0_0 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:57.510 Found net devices under 0000:31:00.1: cvl_0_1 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.510 11:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.510 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.511 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.511 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.511 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:24:57.511 00:24:57.511 --- 10.0.0.2 ping statistics --- 00:24:57.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.511 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:57.511 00:24:57.511 --- 10.0.0.1 ping statistics --- 00:24:57.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.511 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2577529 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2577529 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 2577529 ']' 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.511 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 Malloc0 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.772 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.032 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.032 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:58.032 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:30.141 Fuzzing completed. 
Shutting down the fuzz application 00:25:30.141 00:25:30.141 Dumping successful admin opcodes: 00:25:30.141 9, 10, 00:25:30.141 Dumping successful io opcodes: 00:25:30.141 0, 9, 00:25:30.141 NS: 0x2000008efec0 I/O qp, Total commands completed: 812043, total successful commands: 4719, random_seed: 3948832512 00:25:30.141 NS: 0x2000008efec0 admin qp, Total commands completed: 102608, total successful commands: 24, random_seed: 3084748480 00:25:30.141 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:30.141 Fuzzing completed. Shutting down the fuzz application 00:25:30.141 00:25:30.141 Dumping successful admin opcodes: 00:25:30.141 00:25:30.141 Dumping successful io opcodes: 00:25:30.141 00:25:30.141 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1411097717 00:25:30.141 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1411212571 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:30.141 11:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.141 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.141 rmmod nvme_tcp 00:25:30.141 rmmod nvme_fabrics 00:25:30.401 rmmod nvme_keyring 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2577529 ']' 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2577529 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2577529 ']' 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 2577529 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2577529 00:25:30.401 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.402 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:30.402 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2577529' 00:25:30.402 killing process with pid 2577529 00:25:30.402 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 2577529 00:25:30.402 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 2577529 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.345 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.256 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:33.256 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:33.516 00:25:33.517 real 0m44.215s 00:25:33.517 user 0m59.814s 00:25:33.517 sys 0m14.554s 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:33.517 ************************************ 00:25:33.517 END TEST nvmf_fuzz 00:25:33.517 ************************************ 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:33.517 ************************************ 00:25:33.517 START TEST nvmf_multiconnection 00:25:33.517 ************************************ 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:33.517 * Looking for test storage... 
00:25:33.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:33.517 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:33.778 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:33.778 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.778 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.778 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:33.779 11:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.779 --rc genhtml_branch_coverage=1 00:25:33.779 --rc genhtml_function_coverage=1 00:25:33.779 --rc genhtml_legend=1 00:25:33.779 --rc geninfo_all_blocks=1 00:25:33.779 --rc geninfo_unexecuted_blocks=1 00:25:33.779 00:25:33.779 ' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.779 --rc genhtml_branch_coverage=1 00:25:33.779 --rc genhtml_function_coverage=1 00:25:33.779 --rc genhtml_legend=1 00:25:33.779 --rc geninfo_all_blocks=1 00:25:33.779 --rc geninfo_unexecuted_blocks=1 00:25:33.779 00:25:33.779 ' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.779 --rc genhtml_branch_coverage=1 00:25:33.779 --rc genhtml_function_coverage=1 00:25:33.779 --rc genhtml_legend=1 00:25:33.779 --rc geninfo_all_blocks=1 00:25:33.779 --rc geninfo_unexecuted_blocks=1 00:25:33.779 00:25:33.779 ' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.779 --rc genhtml_branch_coverage=1 00:25:33.779 --rc genhtml_function_coverage=1 00:25:33.779 --rc genhtml_legend=1 00:25:33.779 --rc geninfo_all_blocks=1 00:25:33.779 --rc geninfo_unexecuted_blocks=1 00:25:33.779 00:25:33.779 ' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.779 11:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:33.780 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
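The trace that follows (`gather_supported_nvmf_pci_devs` in `nvmf/common.sh`) builds arrays of supported PCI IDs and scans the bus for matching NICs, finding the two E810 ports at `0000:31:00.0/.1`. A minimal sketch of that discovery, assuming the standard Linux sysfs layout and using only a subset of the vendor/device IDs visible in the log below:

```shell
#!/usr/bin/env bash
# Hedged sketch of the NIC discovery step traced below: walk sysfs PCI
# devices, keep those matching known Intel/Mellanox NVMe-oF-capable IDs,
# and collect the kernel net device names under each. IDs are a subset
# of those enumerated in the log; not the full autotest implementation.
intel=0x8086 mellanox=0x15b3
supported=("$intel:0x1592" "$intel:0x159b" "$intel:0x37d2"
           "$mellanox:0x1017" "$mellanox:0x1019")
net_devs=()
for dev in /sys/bus/pci/devices/*; do
    [[ -e $dev/vendor && -e $dev/device ]] || continue
    id="$(<"$dev/vendor"):$(<"$dev/device")"
    for want in "${supported[@]}"; do
        if [[ $id == "$want" ]]; then
            # Each matching PCI device may expose one or more net devices.
            for net in "$dev"/net/*; do
                [[ -e $net ]] && net_devs+=("${net##*/}")
            done
        fi
    done
done
echo "matched ${#net_devs[@]} device(s)"
```

On the CI node in this run, the scan yields `cvl_0_0` and `cvl_0_1` under the two `0x8086:0x159b` functions.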
00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.049 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.049 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:42.049 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:42.049 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.049 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:42.050 Found net devices under 0000:31:00.0: cvl_0_0 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:42.050 Found net devices under 0000:31:00.1: cvl_0_1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.050 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:25:42.050 00:25:42.050 --- 10.0.0.2 ping statistics --- 00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.050 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:25:42.050 00:25:42.050 --- 10.0.0.1 ping statistics --- 00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.050 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
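The `nvmf_tcp_init` sequence traced above moves the target NIC into a fresh network namespace, addresses both ends, and verifies connectivity in each direction with `ping`. A dry-run sketch of that wiring, with interface names and addresses taken from the log (`run` echoes instead of executing, so it works without root or real NICs):

```shell
#!/usr/bin/env bash
# Hedged sketch of the namespace wiring performed above (nvmf/common.sh).
# "run" prints the command rather than executing it; on the CI node the
# real commands run with root privileges against physical interfaces.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, target side, 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, initiator, 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow the NVMe/TCP port through the host firewall, as the trace does.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two ping blocks in the log (0.548 ms and 0.342 ms round trips, 0% loss) are the success criteria for this step; `nvmf_tgt` is then launched inside the namespace via `NVMF_TARGET_NS_CMD`.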
00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2588272 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2588272 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 2588272 ']' 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.050 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.050 [2024-12-07 11:37:40.556637] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:25:42.050 [2024-12-07 11:37:40.556738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.050 [2024-12-07 11:37:40.674579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.050 [2024-12-07 11:37:40.776063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.050 [2024-12-07 11:37:40.776107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.050 [2024-12-07 11:37:40.776119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.050 [2024-12-07 11:37:40.776130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.050 [2024-12-07 11:37:40.776139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:42.050 [2024-12-07 11:37:40.782046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.050 [2024-12-07 11:37:40.782125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.050 [2024-12-07 11:37:40.782451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.050 [2024-12-07 11:37:40.782470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.050 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.050 [2024-12-07 11:37:41.397812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:42.312 11:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.312 Malloc1 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.312 [2024-12-07 11:37:41.525904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.312 Malloc2
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.312 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 Malloc3
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 Malloc4
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 Malloc5
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:25:42.573 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.574 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.834 Malloc6
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.834 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:25:42.835 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 Malloc7
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 Malloc8
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.835 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.096 Malloc9
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:25:43.096 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 Malloc10
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.097 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.097 Malloc11
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:43.358 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
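The xtrace above is the per-subsystem setup loop from target/multiconnection.sh: one malloc bdev, one subsystem, one namespace, and one TCP listener per index. A self-contained sketch of what that loop issues is below; the `rpc_cmd` stub here just echoes each RPC (in the harness it wraps `scripts/rpc.py` against the running target), and the variable names mirror the log, not a verified copy of the script.

```shell
#!/usr/bin/env bash
# Sketch of the multiconnection.sh setup loop seen in the xtrace.
# rpc_cmd is a stand-in: the real helper forwards to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

NVMF_SUBSYS=11                 # subsystem count, matching the log
NVMF_FIRST_TARGET_IP=10.0.0.2  # assumed name for the listener address
NVMF_PORT=4420

for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 64 MiB malloc bdev with 512-byte blocks, one per subsystem
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem cnode$i: -a allows any host, -s sets serial SPDK$i
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # Expose the bdev as a namespace of that subsystem
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Listen for NVMe/TCP connections on 10.0.0.2:4420
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
done
```

Run against a live target, the same four RPCs per index produce exactly the `Malloc2`/`cnode2` ... `Malloc11`/`cnode11` sequence recorded above.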
00:25:44.744 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:25:44.744 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:44.744 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:44.744 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:44.744 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:46.656 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:46.656 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:46.656 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:25:46.656 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:46.656 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:46.656 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:46.656 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:46.656 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:25:48.564 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:25:48.564 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:48.564 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:48.564 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:48.564 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:50.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:25:51.856 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:25:51.856 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:51.857 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:51.857 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:51.857 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:53.766 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:25:55.690 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:25:55.690 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:55.690 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:55.690 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:55.690 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:57.602 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:25:59.511 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:25:59.511 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:59.511 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:59.511 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:59.511 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:01.425 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:26:02.809 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:26:02.810 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:02.810 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:02.810 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:02.810 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:05.355 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:26:06.737 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:26:06.737 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:06.737 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:06.737 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:06.737 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:08.648 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:26:10.561 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:26:10.561 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:10.561 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:10.561 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:10.561 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:12.473 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:26:14.387 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:26:14.387 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:14.387 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:14.387 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:14.387 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:16.409 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:26:18.319 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:26:18.319 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:18.319 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:18.319 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:18.319 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10
00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:20.235 11:38:19
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.235 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:22.146 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:22.146 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:22.146 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.146 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:22.146 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.061 
11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:24.061 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:24.061 [global] 00:26:24.061 thread=1 00:26:24.061 invalidate=1 00:26:24.061 rw=read 00:26:24.061 time_based=1 00:26:24.061 runtime=10 00:26:24.061 ioengine=libaio 00:26:24.061 direct=1 00:26:24.061 bs=262144 00:26:24.061 iodepth=64 00:26:24.061 norandommap=1 00:26:24.061 numjobs=1 00:26:24.061 00:26:24.061 [job0] 00:26:24.061 filename=/dev/nvme0n1 00:26:24.061 [job1] 00:26:24.061 filename=/dev/nvme10n1 00:26:24.061 [job2] 00:26:24.061 filename=/dev/nvme1n1 00:26:24.061 [job3] 00:26:24.061 filename=/dev/nvme2n1 00:26:24.061 [job4] 00:26:24.061 filename=/dev/nvme3n1 00:26:24.061 [job5] 00:26:24.061 filename=/dev/nvme4n1 00:26:24.061 [job6] 00:26:24.061 filename=/dev/nvme5n1 00:26:24.061 [job7] 00:26:24.061 filename=/dev/nvme6n1 00:26:24.061 [job8] 00:26:24.061 filename=/dev/nvme7n1 00:26:24.061 [job9] 00:26:24.061 filename=/dev/nvme8n1 00:26:24.061 [job10] 00:26:24.061 filename=/dev/nvme9n1 00:26:24.347 Could not set queue depth (nvme0n1) 00:26:24.347 Could not set queue depth (nvme10n1) 00:26:24.347 Could not set queue depth (nvme1n1) 00:26:24.347 Could not set queue depth (nvme2n1) 00:26:24.347 Could not set queue depth (nvme3n1) 00:26:24.347 Could not set queue depth (nvme4n1) 00:26:24.347 Could not set queue depth (nvme5n1) 00:26:24.347 Could not set queue depth (nvme6n1) 00:26:24.347 Could not set queue depth (nvme7n1) 00:26:24.347 Could not set queue depth (nvme8n1) 00:26:24.347 Could not set queue depth (nvme9n1) 00:26:24.611 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.611 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:24.611 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.611 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.612 fio-3.35 00:26:24.612 Starting 11 threads 00:26:36.844 00:26:36.844 job0: (groupid=0, jobs=1): err= 0: pid=2596796: Sat Dec 7 11:38:34 2024 00:26:36.844 read: IOPS=290, BW=72.7MiB/s (76.2MB/s)(736MiB/10127msec) 00:26:36.844 slat (usec): min=11, max=776159, avg=3124.74, stdev=18323.32 00:26:36.844 clat (msec): min=17, max=1217, avg=216.73, stdev=192.16 00:26:36.844 lat (msec): min=18, max=1217, avg=219.85, stdev=194.15 00:26:36.844 clat percentiles (msec): 00:26:36.844 | 1.00th=[ 35], 5.00th=[ 62], 10.00th=[ 74], 20.00th=[ 115], 00:26:36.844 | 30.00th=[ 131], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 161], 00:26:36.844 | 70.00th=[ 178], 80.00th=[ 296], 90.00th=[ 472], 95.00th=[ 575], 00:26:36.844 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:26:36.844 | 99.99th=[ 1217] 00:26:36.844 bw ( KiB/s): min=15872, 
max=132096, per=10.77%, avg=77608.42, stdev=39708.64, samples=19 00:26:36.844 iops : min= 62, max= 516, avg=303.16, stdev=155.11, samples=19 00:26:36.844 lat (msec) : 20=0.34%, 50=2.04%, 100=12.98%, 250=61.51%, 500=14.37% 00:26:36.844 lat (msec) : 750=5.64%, 1000=1.43%, 2000=1.70% 00:26:36.844 cpu : usr=0.11%, sys=0.91%, ctx=543, majf=0, minf=4097 00:26:36.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:36.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.844 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.844 job1: (groupid=0, jobs=1): err= 0: pid=2596813: Sat Dec 7 11:38:34 2024 00:26:36.844 read: IOPS=159, BW=39.8MiB/s (41.7MB/s)(403MiB/10132msec) 00:26:36.844 slat (usec): min=12, max=283069, avg=4669.81, stdev=18946.30 00:26:36.844 clat (msec): min=16, max=971, avg=397.35, stdev=191.79 00:26:36.844 lat (msec): min=16, max=971, avg=402.02, stdev=193.20 00:26:36.844 clat percentiles (msec): 00:26:36.844 | 1.00th=[ 20], 5.00th=[ 61], 10.00th=[ 167], 20.00th=[ 222], 00:26:36.844 | 30.00th=[ 305], 40.00th=[ 351], 50.00th=[ 388], 60.00th=[ 439], 00:26:36.844 | 70.00th=[ 485], 80.00th=[ 542], 90.00th=[ 634], 95.00th=[ 718], 00:26:36.844 | 99.00th=[ 919], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 969], 00:26:36.844 | 99.99th=[ 969] 00:26:36.844 bw ( KiB/s): min=15872, max=94396, per=5.50%, avg=39612.60, stdev=17856.75, samples=20 00:26:36.844 iops : min= 62, max= 368, avg=154.70, stdev=69.63, samples=20 00:26:36.844 lat (msec) : 20=1.12%, 50=3.60%, 100=1.61%, 250=16.33%, 500=49.53% 00:26:36.844 lat (msec) : 750=23.34%, 1000=4.47% 00:26:36.844 cpu : usr=0.07%, sys=0.58%, ctx=274, majf=0, minf=4098 00:26:36.844 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:36.844 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.844 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.844 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.844 job2: (groupid=0, jobs=1): err= 0: pid=2596834: Sat Dec 7 11:38:34 2024 00:26:36.844 read: IOPS=228, BW=57.2MiB/s (60.0MB/s)(576MiB/10067msec) 00:26:36.844 slat (usec): min=10, max=132251, avg=4343.87, stdev=13703.51 00:26:36.844 clat (msec): min=15, max=654, avg=274.73, stdev=145.04 00:26:36.844 lat (msec): min=16, max=654, avg=279.07, stdev=147.09 00:26:36.844 clat percentiles (msec): 00:26:36.844 | 1.00th=[ 61], 5.00th=[ 75], 10.00th=[ 91], 20.00th=[ 124], 00:26:36.844 | 30.00th=[ 197], 40.00th=[ 236], 50.00th=[ 255], 60.00th=[ 288], 00:26:36.844 | 70.00th=[ 338], 80.00th=[ 414], 90.00th=[ 493], 95.00th=[ 550], 00:26:36.844 | 99.00th=[ 609], 99.50th=[ 634], 99.90th=[ 634], 99.95th=[ 634], 00:26:36.844 | 99.99th=[ 651] 00:26:36.844 bw ( KiB/s): min=20992, max=160256, per=7.96%, avg=57351.20, stdev=32472.01, samples=20 00:26:36.844 iops : min= 82, max= 626, avg=224.00, stdev=126.83, samples=20 00:26:36.844 lat (msec) : 20=0.22%, 50=0.13%, 100=14.41%, 250=32.07%, 500=43.66% 00:26:36.844 lat (msec) : 750=9.51% 00:26:36.844 cpu : usr=0.06%, sys=0.91%, ctx=388, majf=0, minf=4097 00:26:36.844 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:36.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.844 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.844 job3: (groupid=0, jobs=1): err= 0: pid=2596846: Sat Dec 7 11:38:34 2024 00:26:36.844 read: IOPS=182, BW=45.7MiB/s (47.9MB/s)(459MiB/10053msec) 00:26:36.844 slat (usec): min=11, max=242097, avg=4051.68, 
stdev=17257.91 00:26:36.844 clat (msec): min=21, max=927, avg=345.86, stdev=220.35 00:26:36.844 lat (msec): min=21, max=927, avg=349.91, stdev=222.55 00:26:36.844 clat percentiles (msec): 00:26:36.844 | 1.00th=[ 29], 5.00th=[ 42], 10.00th=[ 52], 20.00th=[ 125], 00:26:36.844 | 30.00th=[ 215], 40.00th=[ 239], 50.00th=[ 271], 60.00th=[ 405], 00:26:36.844 | 70.00th=[ 506], 80.00th=[ 575], 90.00th=[ 651], 95.00th=[ 701], 00:26:36.844 | 99.00th=[ 827], 99.50th=[ 877], 99.90th=[ 927], 99.95th=[ 927], 00:26:36.844 | 99.99th=[ 927] 00:26:36.844 bw ( KiB/s): min=14818, max=137728, per=6.30%, avg=45412.90, stdev=30491.72, samples=20 00:26:36.844 iops : min= 57, max= 538, avg=177.35, stdev=119.16, samples=20 00:26:36.844 lat (msec) : 50=9.36%, 100=6.80%, 250=27.71%, 500=25.26%, 750=28.14% 00:26:36.844 lat (msec) : 1000=2.72% 00:26:36.844 cpu : usr=0.07%, sys=0.64%, ctx=364, majf=0, minf=4097 00:26:36.844 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:36.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.844 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.844 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.844 job4: (groupid=0, jobs=1): err= 0: pid=2596852: Sat Dec 7 11:38:34 2024 00:26:36.844 read: IOPS=186, BW=46.5MiB/s (48.8MB/s)(471MiB/10118msec) 00:26:36.844 slat (usec): min=11, max=447887, avg=2994.81, stdev=17641.29 00:26:36.844 clat (msec): min=7, max=869, avg=340.16, stdev=146.10 00:26:36.844 lat (msec): min=7, max=1271, avg=343.16, stdev=148.04 00:26:36.844 clat percentiles (msec): 00:26:36.844 | 1.00th=[ 43], 5.00th=[ 126], 10.00th=[ 176], 20.00th=[ 241], 00:26:36.845 | 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 317], 60.00th=[ 330], 00:26:36.845 | 70.00th=[ 372], 80.00th=[ 430], 90.00th=[ 550], 95.00th=[ 634], 00:26:36.845 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 852], 
99.95th=[ 869], 00:26:36.845 | 99.99th=[ 869] 00:26:36.845 bw ( KiB/s): min=15872, max=87040, per=6.47%, avg=46617.60, stdev=14420.90, samples=20 00:26:36.845 iops : min= 62, max= 340, avg=182.10, stdev=56.33, samples=20 00:26:36.845 lat (msec) : 10=0.05%, 20=0.21%, 50=0.90%, 100=1.17%, 250=20.17% 00:26:36.845 lat (msec) : 500=63.22%, 750=12.31%, 1000=1.96% 00:26:36.845 cpu : usr=0.09%, sys=0.66%, ctx=373, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job5: (groupid=0, jobs=1): err= 0: pid=2596866: Sat Dec 7 11:38:34 2024 00:26:36.845 read: IOPS=138, BW=34.7MiB/s (36.4MB/s)(351MiB/10121msec) 00:26:36.845 slat (usec): min=15, max=488845, avg=7018.06, stdev=26147.10 00:26:36.845 clat (msec): min=12, max=1065, avg=453.47, stdev=233.39 00:26:36.845 lat (msec): min=13, max=1065, avg=460.49, stdev=236.41 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 23], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 279], 00:26:36.845 | 30.00th=[ 397], 40.00th=[ 451], 50.00th=[ 502], 60.00th=[ 558], 00:26:36.845 | 70.00th=[ 584], 80.00th=[ 634], 90.00th=[ 693], 95.00th=[ 793], 00:26:36.845 | 99.00th=[ 885], 99.50th=[ 894], 99.90th=[ 995], 99.95th=[ 1070], 00:26:36.845 | 99.99th=[ 1070] 00:26:36.845 bw ( KiB/s): min=12288, max=127488, per=4.76%, avg=34329.60, stdev=22977.62, samples=20 00:26:36.845 iops : min= 48, max= 498, avg=134.10, stdev=89.76, samples=20 00:26:36.845 lat (msec) : 20=0.71%, 50=8.54%, 100=9.61%, 250=0.78%, 500=30.39% 00:26:36.845 lat (msec) : 750=41.92%, 1000=7.97%, 2000=0.07% 00:26:36.845 cpu : usr=0.05%, sys=0.57%, ctx=223, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 
2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=1405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job6: (groupid=0, jobs=1): err= 0: pid=2596878: Sat Dec 7 11:38:34 2024 00:26:36.845 read: IOPS=214, BW=53.5MiB/s (56.1MB/s)(540MiB/10088msec) 00:26:36.845 slat (usec): min=12, max=366216, avg=3284.67, stdev=15181.39 00:26:36.845 clat (msec): min=3, max=918, avg=295.37, stdev=176.89 00:26:36.845 lat (msec): min=3, max=1103, avg=298.66, stdev=179.25 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 112], 20.00th=[ 174], 00:26:36.845 | 30.00th=[ 197], 40.00th=[ 211], 50.00th=[ 232], 60.00th=[ 292], 00:26:36.845 | 70.00th=[ 359], 80.00th=[ 443], 90.00th=[ 558], 95.00th=[ 617], 00:26:36.845 | 99.00th=[ 835], 99.50th=[ 860], 99.90th=[ 894], 99.95th=[ 894], 00:26:36.845 | 99.99th=[ 919] 00:26:36.845 bw ( KiB/s): min=26112, max=105472, per=7.45%, avg=53690.30, stdev=23881.14, samples=20 00:26:36.845 iops : min= 102, max= 412, avg=209.70, stdev=93.26, samples=20 00:26:36.845 lat (msec) : 4=0.05%, 10=0.23%, 20=1.81%, 50=3.47%, 100=2.92% 00:26:36.845 lat (msec) : 250=46.48%, 500=29.31%, 750=13.33%, 1000=2.41% 00:26:36.845 cpu : usr=0.07%, sys=0.76%, ctx=406, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job7: (groupid=0, jobs=1): err= 0: pid=2596887: Sat Dec 7 11:38:34 2024 
00:26:36.845 read: IOPS=807, BW=202MiB/s (212MB/s)(2044MiB/10120msec) 00:26:36.845 slat (usec): min=9, max=54711, avg=1220.53, stdev=4195.02 00:26:36.845 clat (msec): min=10, max=412, avg=77.85, stdev=63.73 00:26:36.845 lat (msec): min=12, max=412, avg=79.07, stdev=64.70 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 39], 00:26:36.845 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 44], 60.00th=[ 51], 00:26:36.845 | 70.00th=[ 96], 80.00th=[ 121], 90.00th=[ 142], 95.00th=[ 243], 00:26:36.845 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 384], 00:26:36.845 | 99.99th=[ 414] 00:26:36.845 bw ( KiB/s): min=51200, max=435712, per=28.81%, avg=207651.45, stdev=140721.01, samples=20 00:26:36.845 iops : min= 200, max= 1702, avg=811.10, stdev=549.71, samples=20 00:26:36.845 lat (msec) : 20=0.22%, 50=59.66%, 100=12.20%, 250=23.16%, 500=4.77% 00:26:36.845 cpu : usr=0.33%, sys=2.63%, ctx=1048, majf=0, minf=3534 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=8175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job8: (groupid=0, jobs=1): err= 0: pid=2596916: Sat Dec 7 11:38:34 2024 00:26:36.845 read: IOPS=320, BW=80.1MiB/s (84.0MB/s)(809MiB/10102msec) 00:26:36.845 slat (usec): min=12, max=339294, avg=2627.24, stdev=15928.21 00:26:36.845 clat (usec): min=1611, max=1026.8k, avg=196827.16, stdev=238150.15 00:26:36.845 lat (usec): min=1659, max=1026.9k, avg=199454.39, stdev=241268.80 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 10], 20.00th=[ 19], 00:26:36.845 | 30.00th=[ 58], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 89], 00:26:36.845 | 70.00th=[ 96], 80.00th=[ 451], 
90.00th=[ 584], 95.00th=[ 701], 00:26:36.845 | 99.00th=[ 894], 99.50th=[ 953], 99.90th=[ 1028], 99.95th=[ 1028], 00:26:36.845 | 99.99th=[ 1028] 00:26:36.845 bw ( KiB/s): min= 9216, max=238592, per=11.27%, avg=81199.50, stdev=78515.87, samples=20 00:26:36.845 iops : min= 36, max= 932, avg=317.15, stdev=306.72, samples=20 00:26:36.845 lat (msec) : 2=0.22%, 4=3.21%, 10=6.89%, 20=11.34%, 50=6.80% 00:26:36.845 lat (msec) : 100=41.75%, 250=1.95%, 500=12.61%, 750=12.21%, 1000=2.78% 00:26:36.845 lat (msec) : 2000=0.25% 00:26:36.845 cpu : usr=0.15%, sys=1.22%, ctx=1207, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=3236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job9: (groupid=0, jobs=1): err= 0: pid=2596934: Sat Dec 7 11:38:34 2024 00:26:36.845 read: IOPS=160, BW=40.1MiB/s (42.1MB/s)(406MiB/10123msec) 00:26:36.845 slat (usec): min=12, max=282679, avg=5223.70, stdev=19272.12 00:26:36.845 clat (msec): min=16, max=882, avg=392.81, stdev=180.58 00:26:36.845 lat (msec): min=16, max=994, avg=398.03, stdev=183.31 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 36], 5.00th=[ 104], 10.00th=[ 150], 20.00th=[ 253], 00:26:36.845 | 30.00th=[ 288], 40.00th=[ 313], 50.00th=[ 380], 60.00th=[ 456], 00:26:36.845 | 70.00th=[ 489], 80.00th=[ 558], 90.00th=[ 634], 95.00th=[ 701], 00:26:36.845 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 885], 99.95th=[ 885], 00:26:36.845 | 99.99th=[ 885] 00:26:36.845 bw ( KiB/s): min=18944, max=82432, per=5.55%, avg=39961.60, stdev=14775.32, samples=20 00:26:36.845 iops : min= 74, max= 322, avg=156.10, stdev=57.72, samples=20 00:26:36.845 lat (msec) : 20=0.31%, 50=0.92%, 100=3.51%, 250=14.95%, 500=52.49% 
00:26:36.845 lat (msec) : 750=25.05%, 1000=2.77% 00:26:36.845 cpu : usr=0.06%, sys=0.57%, ctx=328, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=1625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:36.845 job10: (groupid=0, jobs=1): err= 0: pid=2596945: Sat Dec 7 11:38:34 2024 00:26:36.845 read: IOPS=132, BW=33.1MiB/s (34.7MB/s)(335MiB/10117msec) 00:26:36.845 slat (usec): min=13, max=349767, avg=7471.76, stdev=22841.17 00:26:36.845 clat (msec): min=40, max=924, avg=474.75, stdev=139.49 00:26:36.845 lat (msec): min=40, max=924, avg=482.22, stdev=141.55 00:26:36.845 clat percentiles (msec): 00:26:36.845 | 1.00th=[ 124], 5.00th=[ 279], 10.00th=[ 313], 20.00th=[ 368], 00:26:36.845 | 30.00th=[ 409], 40.00th=[ 443], 50.00th=[ 468], 60.00th=[ 502], 00:26:36.845 | 70.00th=[ 542], 80.00th=[ 575], 90.00th=[ 651], 95.00th=[ 760], 00:26:36.845 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 852], 99.95th=[ 927], 00:26:36.845 | 99.99th=[ 927] 00:26:36.845 bw ( KiB/s): min=20992, max=49152, per=4.54%, avg=32691.20, stdev=8153.64, samples=20 00:26:36.845 iops : min= 82, max= 192, avg=127.70, stdev=31.85, samples=20 00:26:36.845 lat (msec) : 50=0.45%, 100=0.22%, 250=2.76%, 500=56.30%, 750=34.38% 00:26:36.845 lat (msec) : 1000=5.89% 00:26:36.845 cpu : usr=0.04%, sys=0.58%, ctx=229, majf=0, minf=4097 00:26:36.845 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.845 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:36.845 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.845 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:36.845 00:26:36.845 Run status group 0 (all jobs): 00:26:36.845 READ: bw=704MiB/s (738MB/s), 33.1MiB/s-202MiB/s (34.7MB/s-212MB/s), io=7131MiB (7477MB), run=10053-10132msec 00:26:36.845 00:26:36.845 Disk stats (read/write): 00:26:36.845 nvme0n1: ios=5786/0, merge=0/0, ticks=1244947/0, in_queue=1244947, util=96.44% 00:26:36.845 nvme10n1: ios=3129/0, merge=0/0, ticks=1243647/0, in_queue=1243647, util=96.75% 00:26:36.845 nvme1n1: ios=4407/0, merge=0/0, ticks=1211327/0, in_queue=1211327, util=97.02% 00:26:36.845 nvme2n1: ios=3461/0, merge=0/0, ticks=1218789/0, in_queue=1218789, util=97.18% 00:26:36.845 nvme3n1: ios=3702/0, merge=0/0, ticks=1254872/0, in_queue=1254872, util=97.40% 00:26:36.845 nvme4n1: ios=2713/0, merge=0/0, ticks=1233871/0, in_queue=1233871, util=97.82% 00:26:36.845 nvme5n1: ios=4296/0, merge=0/0, ticks=1260708/0, in_queue=1260708, util=98.01% 00:26:36.845 nvme6n1: ios=16302/0, merge=0/0, ticks=1251128/0, in_queue=1251128, util=98.34% 00:26:36.845 nvme7n1: ios=6454/0, merge=0/0, ticks=1253679/0, in_queue=1253679, util=98.82% 00:26:36.845 nvme8n1: ios=3168/0, merge=0/0, ticks=1246938/0, in_queue=1246938, util=99.07% 00:26:36.845 nvme9n1: ios=2606/0, merge=0/0, ticks=1245292/0, in_queue=1245292, util=99.20% 00:26:36.845 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:36.845 [global] 00:26:36.845 thread=1 00:26:36.845 invalidate=1 00:26:36.845 rw=randwrite 00:26:36.845 time_based=1 00:26:36.845 runtime=10 00:26:36.845 ioengine=libaio 00:26:36.845 direct=1 00:26:36.845 bs=262144 00:26:36.845 iodepth=64 00:26:36.845 norandommap=1 00:26:36.845 numjobs=1 00:26:36.845 00:26:36.845 [job0] 00:26:36.845 filename=/dev/nvme0n1 00:26:36.845 [job1] 00:26:36.845 filename=/dev/nvme10n1 00:26:36.845 [job2] 00:26:36.845 filename=/dev/nvme1n1 00:26:36.845 [job3] 00:26:36.845 
filename=/dev/nvme2n1 00:26:36.845 [job4] 00:26:36.845 filename=/dev/nvme3n1 00:26:36.845 [job5] 00:26:36.845 filename=/dev/nvme4n1 00:26:36.845 [job6] 00:26:36.845 filename=/dev/nvme5n1 00:26:36.845 [job7] 00:26:36.845 filename=/dev/nvme6n1 00:26:36.845 [job8] 00:26:36.845 filename=/dev/nvme7n1 00:26:36.845 [job9] 00:26:36.845 filename=/dev/nvme8n1 00:26:36.845 [job10] 00:26:36.845 filename=/dev/nvme9n1 00:26:36.845 Could not set queue depth (nvme0n1) 00:26:36.845 Could not set queue depth (nvme10n1) 00:26:36.845 Could not set queue depth (nvme1n1) 00:26:36.845 Could not set queue depth (nvme2n1) 00:26:36.845 Could not set queue depth (nvme3n1) 00:26:36.845 Could not set queue depth (nvme4n1) 00:26:36.845 Could not set queue depth (nvme5n1) 00:26:36.845 Could not set queue depth (nvme6n1) 00:26:36.845 Could not set queue depth (nvme7n1) 00:26:36.845 Could not set queue depth (nvme8n1) 00:26:36.845 Could not set queue depth (nvme9n1) 00:26:36.845 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.845 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.846 job8: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.846 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.846 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:36.846 fio-3.35 00:26:36.846 Starting 11 threads 00:26:46.847 00:26:46.847 job0: (groupid=0, jobs=1): err= 0: pid=2598706: Sat Dec 7 11:38:45 2024 00:26:46.847 write: IOPS=363, BW=90.9MiB/s (95.4MB/s)(918MiB/10094msec); 0 zone resets 00:26:46.847 slat (usec): min=27, max=19999, avg=2596.71, stdev=4785.15 00:26:46.848 clat (msec): min=19, max=332, avg=173.28, stdev=39.84 00:26:46.848 lat (msec): min=19, max=332, avg=175.88, stdev=40.10 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 71], 5.00th=[ 106], 10.00th=[ 112], 20.00th=[ 140], 00:26:46.848 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:26:46.848 | 70.00th=[ 190], 80.00th=[ 207], 90.00th=[ 226], 95.00th=[ 230], 00:26:46.848 | 99.00th=[ 245], 99.50th=[ 257], 99.90th=[ 317], 99.95th=[ 326], 00:26:46.848 | 99.99th=[ 334] 00:26:46.848 bw ( KiB/s): min=70656, max=147456, per=8.75%, avg=92416.60, stdev=20837.40, samples=20 00:26:46.848 iops : min= 276, max= 576, avg=360.80, stdev=81.42, samples=20 00:26:46.848 lat (msec) : 20=0.11%, 50=0.25%, 100=2.31%, 250=96.60%, 500=0.74% 00:26:46.848 cpu : usr=0.86%, sys=0.94%, ctx=1034, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,3672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job1: (groupid=0, jobs=1): err= 0: pid=2598743: Sat Dec 7 11:38:45 2024 
00:26:46.848 write: IOPS=223, BW=55.8MiB/s (58.5MB/s)(570MiB/10209msec); 0 zone resets 00:26:46.848 slat (usec): min=23, max=219392, avg=4307.26, stdev=9793.69 00:26:46.848 clat (msec): min=82, max=607, avg=282.31, stdev=84.88 00:26:46.848 lat (msec): min=82, max=607, avg=286.61, stdev=85.64 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 123], 5.00th=[ 161], 10.00th=[ 186], 20.00th=[ 213], 00:26:46.848 | 30.00th=[ 226], 40.00th=[ 230], 50.00th=[ 247], 60.00th=[ 334], 00:26:46.848 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 409], 00:26:46.848 | 99.00th=[ 468], 99.50th=[ 531], 99.90th=[ 584], 99.95th=[ 609], 00:26:46.848 | 99.99th=[ 609] 00:26:46.848 bw ( KiB/s): min=38912, max=92857, per=5.37%, avg=56716.55, stdev=16864.64, samples=20 00:26:46.848 iops : min= 152, max= 362, avg=221.40, stdev=65.76, samples=20 00:26:46.848 lat (msec) : 100=0.18%, 250=50.61%, 500=48.42%, 750=0.79% 00:26:46.848 cpu : usr=0.59%, sys=0.65%, ctx=558, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,2278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job2: (groupid=0, jobs=1): err= 0: pid=2598778: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(757MiB/10211msec); 0 zone resets 00:26:46.848 slat (usec): min=22, max=75459, avg=3183.53, stdev=6843.92 00:26:46.848 clat (msec): min=6, max=623, avg=212.46, stdev=115.42 00:26:46.848 lat (msec): min=6, max=623, avg=215.64, stdev=117.01 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 66], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:26:46.848 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 197], 60.00th=[ 230], 00:26:46.848 | 70.00th=[ 300], 
80.00th=[ 355], 90.00th=[ 376], 95.00th=[ 393], 00:26:46.848 | 99.00th=[ 439], 99.50th=[ 518], 99.90th=[ 600], 99.95th=[ 625], 00:26:46.848 | 99.99th=[ 625] 00:26:46.848 bw ( KiB/s): min=38912, max=148992, per=7.19%, avg=75920.35, stdev=40572.88, samples=20 00:26:46.848 iops : min= 152, max= 582, avg=296.50, stdev=158.54, samples=20 00:26:46.848 lat (msec) : 10=0.13%, 50=0.26%, 100=3.07%, 250=64.44%, 500=31.50% 00:26:46.848 lat (msec) : 750=0.59% 00:26:46.848 cpu : usr=0.61%, sys=0.83%, ctx=877, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,3029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job3: (groupid=0, jobs=1): err= 0: pid=2598791: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=251, BW=62.8MiB/s (65.9MB/s)(641MiB/10210msec); 0 zone resets 00:26:46.848 slat (usec): min=26, max=66061, avg=3790.00, stdev=7471.75 00:26:46.848 clat (msec): min=19, max=605, avg=250.84, stdev=99.46 00:26:46.848 lat (msec): min=19, max=605, avg=254.63, stdev=100.73 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 92], 5.00th=[ 104], 10.00th=[ 107], 20.00th=[ 176], 00:26:46.848 | 30.00th=[ 190], 40.00th=[ 220], 50.00th=[ 232], 60.00th=[ 255], 00:26:46.848 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 372], 95.00th=[ 393], 00:26:46.848 | 99.00th=[ 451], 99.50th=[ 527], 99.90th=[ 584], 99.95th=[ 609], 00:26:46.848 | 99.99th=[ 609] 00:26:46.848 bw ( KiB/s): min=38912, max=153907, per=6.07%, avg=64068.75, stdev=27821.48, samples=20 00:26:46.848 iops : min= 152, max= 601, avg=250.15, stdev=108.67, samples=20 00:26:46.848 lat (msec) : 20=0.04%, 100=3.04%, 250=56.49%, 500=39.73%, 750=0.70% 00:26:46.848 cpu : usr=0.62%, sys=0.71%, ctx=695, majf=0, 
minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,2565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job4: (groupid=0, jobs=1): err= 0: pid=2598793: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=834, BW=209MiB/s (219MB/s)(2103MiB/10077msec); 0 zone resets 00:26:46.848 slat (usec): min=26, max=17454, avg=1115.91, stdev=2286.51 00:26:46.848 clat (msec): min=9, max=263, avg=75.53, stdev=40.70 00:26:46.848 lat (msec): min=9, max=267, avg=76.65, stdev=41.20 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:26:46.848 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 93], 00:26:46.848 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 133], 00:26:46.848 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 255], 99.95th=[ 259], 00:26:46.848 | 99.99th=[ 264] 00:26:46.848 bw ( KiB/s): min=76953, max=374272, per=20.26%, avg=213826.55, stdev=102479.96, samples=20 00:26:46.848 iops : min= 300, max= 1462, avg=835.10, stdev=400.29, samples=20 00:26:46.848 lat (msec) : 10=0.01%, 20=1.32%, 50=52.66%, 100=17.92%, 250=27.92% 00:26:46.848 lat (msec) : 500=0.18% 00:26:46.848 cpu : usr=1.93%, sys=2.68%, ctx=2312, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,8411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job5: (groupid=0, jobs=1): err= 0: pid=2598794: Sat Dec 7 
11:38:45 2024 00:26:46.848 write: IOPS=622, BW=156MiB/s (163MB/s)(1569MiB/10076msec); 0 zone resets 00:26:46.848 slat (usec): min=25, max=25940, avg=1504.35, stdev=3045.06 00:26:46.848 clat (msec): min=2, max=254, avg=101.23, stdev=44.85 00:26:46.848 lat (msec): min=2, max=254, avg=102.74, stdev=45.44 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 18], 5.00th=[ 67], 10.00th=[ 71], 20.00th=[ 72], 00:26:46.848 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 93], 60.00th=[ 104], 00:26:46.848 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 184], 95.00th=[ 218], 00:26:46.848 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 253], 99.95th=[ 253], 00:26:46.848 | 99.99th=[ 255] 00:26:46.848 bw ( KiB/s): min=69632, max=243712, per=15.07%, avg=159117.65, stdev=58033.46, samples=20 00:26:46.848 iops : min= 272, max= 952, avg=621.40, stdev=226.71, samples=20 00:26:46.848 lat (msec) : 4=0.03%, 10=0.40%, 20=0.75%, 50=1.88%, 100=55.28% 00:26:46.848 lat (msec) : 250=41.23%, 500=0.43% 00:26:46.848 cpu : usr=1.23%, sys=1.95%, ctx=1855, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,6275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job6: (groupid=0, jobs=1): err= 0: pid=2598795: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=228, BW=57.2MiB/s (60.0MB/s)(584MiB/10208msec); 0 zone resets 00:26:46.848 slat (usec): min=23, max=34165, avg=4206.71, stdev=7835.81 00:26:46.848 clat (msec): min=24, max=613, avg=275.44, stdev=84.38 00:26:46.848 lat (msec): min=24, max=613, avg=279.65, stdev=85.28 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 97], 5.00th=[ 167], 10.00th=[ 194], 20.00th=[ 213], 00:26:46.848 | 30.00th=[ 224], 40.00th=[ 228], 50.00th=[ 
232], 60.00th=[ 330], 00:26:46.848 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 380], 95.00th=[ 409], 00:26:46.848 | 99.00th=[ 460], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 617], 00:26:46.848 | 99.99th=[ 617] 00:26:46.848 bw ( KiB/s): min=38912, max=94396, per=5.51%, avg=58176.00, stdev=16535.46, samples=20 00:26:46.848 iops : min= 152, max= 368, avg=227.10, stdev=64.48, samples=20 00:26:46.848 lat (msec) : 50=0.34%, 100=0.69%, 250=56.36%, 500=41.84%, 750=0.77% 00:26:46.848 cpu : usr=0.53%, sys=0.56%, ctx=599, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,2335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job7: (groupid=0, jobs=1): err= 0: pid=2598796: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=269, BW=67.5MiB/s (70.7MB/s)(689MiB/10209msec); 0 zone resets 00:26:46.848 slat (usec): min=22, max=69336, avg=3486.55, stdev=7328.62 00:26:46.848 clat (msec): min=7, max=613, avg=233.47, stdev=110.09 00:26:46.848 lat (msec): min=7, max=613, avg=236.95, stdev=111.69 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 22], 5.00th=[ 97], 10.00th=[ 118], 20.00th=[ 132], 00:26:46.848 | 30.00th=[ 146], 40.00th=[ 176], 50.00th=[ 222], 60.00th=[ 230], 00:26:46.848 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 376], 95.00th=[ 397], 00:26:46.848 | 99.00th=[ 435], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 617], 00:26:46.848 | 99.99th=[ 617] 00:26:46.848 bw ( KiB/s): min=38912, max=143134, per=6.53%, avg=68938.15, stdev=33525.69, samples=20 00:26:46.848 iops : min= 152, max= 559, avg=269.20, stdev=130.98, samples=20 00:26:46.848 lat (msec) : 10=0.33%, 20=0.54%, 50=1.92%, 100=2.36%, 250=59.60% 00:26:46.848 lat (msec) : 500=34.59%, 750=0.65% 
00:26:46.848 cpu : usr=0.61%, sys=1.00%, ctx=835, majf=0, minf=1 00:26:46.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:46.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.848 issued rwts: total=0,2755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.848 job8: (groupid=0, jobs=1): err= 0: pid=2598797: Sat Dec 7 11:38:45 2024 00:26:46.848 write: IOPS=357, BW=89.5MiB/s (93.8MB/s)(914MiB/10210msec); 0 zone resets 00:26:46.848 slat (usec): min=25, max=42064, avg=2430.94, stdev=5169.45 00:26:46.848 clat (msec): min=15, max=620, avg=176.27, stdev=79.68 00:26:46.848 lat (msec): min=15, max=620, avg=178.70, stdev=80.63 00:26:46.848 clat percentiles (msec): 00:26:46.848 | 1.00th=[ 18], 5.00th=[ 79], 10.00th=[ 95], 20.00th=[ 101], 00:26:46.848 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 178], 00:26:46.848 | 70.00th=[ 199], 80.00th=[ 230], 90.00th=[ 241], 95.00th=[ 300], 00:26:46.848 | 99.00th=[ 435], 99.50th=[ 493], 99.90th=[ 592], 99.95th=[ 617], 00:26:46.848 | 99.99th=[ 617] 00:26:46.848 bw ( KiB/s): min=38912, max=180224, per=8.71%, avg=91980.85, stdev=36741.20, samples=20 00:26:46.848 iops : min= 152, max= 704, avg=359.10, stdev=143.55, samples=20 00:26:46.848 lat (msec) : 20=1.42%, 50=1.81%, 100=17.15%, 250=71.08%, 500=8.04% 00:26:46.848 lat (msec) : 750=0.49% 00:26:46.849 cpu : usr=0.85%, sys=0.98%, ctx=1303, majf=0, minf=1 00:26:46.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:46.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.849 issued rwts: total=0,3655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.849 
job9: (groupid=0, jobs=1): err= 0: pid=2598798: Sat Dec 7 11:38:45 2024 00:26:46.849 write: IOPS=331, BW=83.0MiB/s (87.0MB/s)(847MiB/10204msec); 0 zone resets 00:26:46.849 slat (usec): min=28, max=31450, avg=2776.45, stdev=6185.77 00:26:46.849 clat (msec): min=3, max=624, avg=189.89, stdev=113.72 00:26:46.849 lat (msec): min=3, max=624, avg=192.67, stdev=115.35 00:26:46.849 clat percentiles (msec): 00:26:46.849 | 1.00th=[ 7], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 115], 00:26:46.849 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 130], 60.00th=[ 144], 00:26:46.849 | 70.00th=[ 192], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 384], 00:26:46.849 | 99.00th=[ 418], 99.50th=[ 523], 99.90th=[ 600], 99.95th=[ 625], 00:26:46.849 | 99.99th=[ 625] 00:26:46.849 bw ( KiB/s): min=38912, max=144384, per=8.07%, avg=85146.90, stdev=41303.73, samples=20 00:26:46.849 iops : min= 152, max= 564, avg=332.50, stdev=161.35, samples=20 00:26:46.849 lat (msec) : 4=0.15%, 10=1.89%, 20=0.56%, 50=0.35%, 100=1.39% 00:26:46.849 lat (msec) : 250=68.70%, 500=26.42%, 750=0.53% 00:26:46.849 cpu : usr=0.80%, sys=0.90%, ctx=1055, majf=0, minf=1 00:26:46.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:46.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.849 issued rwts: total=0,3387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.849 job10: (groupid=0, jobs=1): err= 0: pid=2598799: Sat Dec 7 11:38:45 2024 00:26:46.849 write: IOPS=370, BW=92.7MiB/s (97.2MB/s)(936MiB/10095msec); 0 zone resets 00:26:46.849 slat (usec): min=27, max=27935, avg=2658.70, stdev=4888.80 00:26:46.849 clat (msec): min=22, max=327, avg=169.94, stdev=50.15 00:26:46.849 lat (msec): min=22, max=327, avg=172.60, stdev=50.70 00:26:46.849 clat percentiles (msec): 00:26:46.849 | 1.00th=[ 88], 5.00th=[ 103], 10.00th=[ 
107], 20.00th=[ 112], 00:26:46.849 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 178], 00:26:46.849 | 70.00th=[ 182], 80.00th=[ 199], 90.00th=[ 234], 95.00th=[ 243], 00:26:46.849 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 326], 99.95th=[ 330], 00:26:46.849 | 99.99th=[ 330] 00:26:46.849 bw ( KiB/s): min=57344, max=152368, per=8.93%, avg=94213.70, stdev=26640.04, samples=20 00:26:46.849 iops : min= 224, max= 595, avg=367.85, stdev=104.10, samples=20 00:26:46.849 lat (msec) : 50=0.45%, 100=3.26%, 250=91.90%, 500=4.38% 00:26:46.849 cpu : usr=0.72%, sys=1.04%, ctx=939, majf=0, minf=1 00:26:46.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:46.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:46.849 issued rwts: total=0,3742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:46.849 00:26:46.849 Run status group 0 (all jobs): 00:26:46.849 WRITE: bw=1031MiB/s (1081MB/s), 55.8MiB/s-209MiB/s (58.5MB/s-219MB/s), io=10.3GiB (11.0GB), run=10076-10211msec 00:26:46.849 00:26:46.849 Disk stats (read/write): 00:26:46.849 nvme0n1: ios=49/7227, merge=0/0, ticks=79/1220149, in_queue=1220228, util=95.99% 00:26:46.849 nvme10n1: ios=52/4431, merge=0/0, ticks=3798/1178931, in_queue=1182729, util=100.00% 00:26:46.849 nvme1n1: ios=44/5940, merge=0/0, ticks=1858/1202629, in_queue=1204487, util=99.95% 00:26:46.849 nvme2n1: ios=0/5004, merge=0/0, ticks=0/1201286, in_queue=1201286, util=97.09% 00:26:46.849 nvme3n1: ios=0/16702, merge=0/0, ticks=0/1219047, in_queue=1219047, util=97.25% 00:26:46.849 nvme4n1: ios=0/12430, merge=0/0, ticks=0/1220633, in_queue=1220633, util=97.70% 00:26:46.849 nvme5n1: ios=0/4548, merge=0/0, ticks=0/1201065, in_queue=1201065, util=97.92% 00:26:46.849 nvme6n1: ios=42/5388, merge=0/0, ticks=3577/1199171, in_queue=1202748, util=100.00% 
00:26:46.849 nvme7n1: ios=0/7189, merge=0/0, ticks=0/1204603, in_queue=1204603, util=98.63% 00:26:46.849 nvme8n1: ios=37/6655, merge=0/0, ticks=1204/1202615, in_queue=1203819, util=99.87% 00:26:46.849 nvme9n1: ios=0/7363, merge=0/0, ticks=0/1218141, in_queue=1218141, util=99.08% 00:26:46.849 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:46.849 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:46.849 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.849 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:47.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:47.108 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:47.108 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:47.108 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.109 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:47.679 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:47.679 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:47.679 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:47.679 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:47.679 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:47.679 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.680 11:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.680 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:48.252 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:26:48.252 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:48.826 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.827 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:49.088 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 
1 controller(s) 00:26:49.088 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:49.088 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:49.088 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:49.088 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.349 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:49.609 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # local i=0 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.610 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:49.870 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:49.870 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:49.870 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:49.870 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:49.870 11:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:49.871 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:49.871 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:50.135 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l 
-o NAME,SERIAL 00:26:50.135 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:50.397 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.397 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.657 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.657 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.657 11:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:50.917 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.917 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:51.179 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.179 rmmod nvme_tcp 00:26:51.179 rmmod nvme_fabrics 00:26:51.179 rmmod nvme_keyring 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2588272 ']' 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2588272 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 
2588272 ']' 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 2588272 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2588272 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2588272' 00:26:51.179 killing process with pid 2588272 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 2588272 00:26:51.179 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 2588272 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # 
iptables-restore 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.719 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.627 00:26:55.627 real 1m21.962s 00:26:55.627 user 5m14.842s 00:26:55.627 sys 0m16.440s 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.627 ************************************ 00:26:55.627 END TEST nvmf_multiconnection 00:26:55.627 ************************************ 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:55.627 ************************************ 00:26:55.627 START TEST nvmf_initiator_timeout 00:26:55.627 ************************************ 00:26:55.627 11:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:55.627 * Looking for test storage... 00:26:55.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 
00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:55.627 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.628 --rc genhtml_branch_coverage=1 00:26:55.628 --rc genhtml_function_coverage=1 00:26:55.628 --rc genhtml_legend=1 00:26:55.628 --rc geninfo_all_blocks=1 00:26:55.628 --rc geninfo_unexecuted_blocks=1 00:26:55.628 00:26:55.628 ' 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.628 --rc genhtml_branch_coverage=1 00:26:55.628 --rc genhtml_function_coverage=1 00:26:55.628 --rc genhtml_legend=1 00:26:55.628 --rc geninfo_all_blocks=1 00:26:55.628 --rc geninfo_unexecuted_blocks=1 00:26:55.628 00:26:55.628 ' 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.628 --rc genhtml_branch_coverage=1 00:26:55.628 --rc genhtml_function_coverage=1 00:26:55.628 --rc genhtml_legend=1 00:26:55.628 --rc geninfo_all_blocks=1 00:26:55.628 --rc geninfo_unexecuted_blocks=1 00:26:55.628 00:26:55.628 ' 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.628 --rc genhtml_branch_coverage=1 00:26:55.628 --rc genhtml_function_coverage=1 00:26:55.628 --rc genhtml_legend=1 00:26:55.628 --rc geninfo_all_blocks=1 00:26:55.628 --rc 
geninfo_unexecuted_blocks=1 00:26:55.628 00:26:55.628 ' 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.628 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.892 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.892 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.224 11:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.224 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:04.225 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:04.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.225 11:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:04.225 Found net devices under 0000:31:00.0: cvl_0_0 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.225 11:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:04.225 Found net devices under 0000:31:00.1: cvl_0_1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.225 11:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:27:04.225 00:27:04.225 --- 10.0.0.2 ping statistics --- 00:27:04.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.225 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:04.225 00:27:04.225 --- 10.0.0.1 ping statistics --- 00:27:04.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.225 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:04.225 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2605750 
00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 2605750 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 2605750 ']' 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.226 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 [2024-12-07 11:39:02.611770] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:27:04.226 [2024-12-07 11:39:02.611905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.226 [2024-12-07 11:39:02.763526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.226 [2024-12-07 11:39:02.866792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:04.226 [2024-12-07 11:39:02.866835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.226 [2024-12-07 11:39:02.866847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.226 [2024-12-07 11:39:02.866859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.226 [2024-12-07 11:39:02.866868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.226 [2024-12-07 11:39:02.869082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.226 [2024-12-07 11:39:02.869133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.226 [2024-12-07 11:39:02.869319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.226 [2024-12-07 11:39:02.869340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:04.226 
11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 Malloc0 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 Delay0 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 [2024-12-07 11:39:03.505866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.226 [2024-12-07 11:39:03.546191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.226 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:06.140 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:06.140 
11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:06.140 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:06.140 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:06.140 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2606629 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:08.085 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:08.085 [global] 00:27:08.085 thread=1 00:27:08.085 invalidate=1 00:27:08.085 rw=write 00:27:08.085 time_based=1 00:27:08.085 runtime=60 00:27:08.085 ioengine=libaio 00:27:08.085 direct=1 00:27:08.085 bs=4096 00:27:08.085 
iodepth=1 00:27:08.085 norandommap=0 00:27:08.085 numjobs=1 00:27:08.085 00:27:08.085 verify_dump=1 00:27:08.085 verify_backlog=512 00:27:08.085 verify_state_save=0 00:27:08.085 do_verify=1 00:27:08.085 verify=crc32c-intel 00:27:08.085 [job0] 00:27:08.085 filename=/dev/nvme0n1 00:27:08.085 Could not set queue depth (nvme0n1) 00:27:08.345 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:08.346 fio-3.35 00:27:08.346 Starting 1 thread 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.889 true 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.889 true 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.889 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.890 true 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.890 true 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.890 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.191 true 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.191 true 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.191 11:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.191 true 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:14.191 true 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:14.191 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2606629 00:28:10.454 00:28:10.454 job0: (groupid=0, jobs=1): err= 0: pid=2606801: Sat Dec 7 11:40:07 2024 00:28:10.454 read: IOPS=47, BW=188KiB/s (193kB/s)(11.0MiB/60039msec) 00:28:10.454 slat (usec): min=7, max=2773, avg=27.09, stdev=52.08 00:28:10.454 clat (usec): min=390, max=41793k, avg=20650.40, stdev=786304.14 00:28:10.454 lat (usec): min=398, max=41793k, avg=20677.49, stdev=786304.20 00:28:10.454 clat percentiles (usec): 00:28:10.454 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 775], 00:28:10.454 | 20.00th=[ 816], 30.00th=[ 840], 40.00th=[ 865], 00:28:10.454 | 50.00th=[ 889], 60.00th=[ 914], 70.00th=[ 1012], 00:28:10.454 | 80.00th=[ 1106], 90.00th=[ 41157], 95.00th=[ 41157], 
00:28:10.454 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:10.454 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:10.454 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60039msec); 0 zone resets 00:28:10.454 slat (usec): min=9, max=32071, avg=39.72, stdev=578.21 00:28:10.454 clat (usec): min=175, max=927, avg=474.28, stdev=120.51 00:28:10.454 lat (usec): min=187, max=32694, avg=514.00, stdev=593.95 00:28:10.454 clat percentiles (usec): 00:28:10.454 | 1.00th=[ 235], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 371], 00:28:10.454 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 465], 00:28:10.454 | 70.00th=[ 498], 80.00th=[ 586], 90.00th=[ 676], 95.00th=[ 701], 00:28:10.454 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 873], 99.95th=[ 889], 00:28:10.454 | 99.99th=[ 930] 00:28:10.454 bw ( KiB/s): min= 184, max= 4096, per=100.00%, avg=3072.00, stdev=1480.98, samples=8 00:28:10.454 iops : min= 46, max= 1024, avg=768.00, stdev=370.24, samples=8 00:28:10.454 lat (usec) : 250=0.95%, 500=35.80%, 750=18.13%, 1000=30.49% 00:28:10.454 lat (msec) : 2=8.72%, 50=5.90%, >=2000=0.02% 00:28:10.454 cpu : usr=0.14%, sys=0.29%, ctx=5901, majf=0, minf=1 00:28:10.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.454 issued rwts: total=2825,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:10.454 00:28:10.454 Run status group 0 (all jobs): 00:28:10.454 READ: bw=188KiB/s (193kB/s), 188KiB/s-188KiB/s (193kB/s-193kB/s), io=11.0MiB (11.6MB), run=60039-60039msec 00:28:10.454 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60039-60039msec 00:28:10.454 00:28:10.454 Disk stats (read/write): 00:28:10.454 nvme0n1: ios=2873/3072, merge=0/0, ticks=17656/1411, 
in_queue=19067, util=99.69% 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:10.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:10.454 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:10.455 nvmf hotplug test: fio successful as expected 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.455 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:10.455 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:10.455 rmmod nvme_tcp 00:28:10.455 rmmod nvme_fabrics 00:28:10.455 rmmod nvme_keyring 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2605750 ']' 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2605750 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 2605750 ']' 00:28:10.455 
11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 2605750 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2605750 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2605750' 00:28:10.455 killing process with pid 2605750 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 2605750 00:28:10.455 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 2605750 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.455 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.837 00:28:11.837 real 1m16.321s 00:28:11.837 user 4m38.782s 00:28:11.837 sys 0m7.828s 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:11.837 ************************************ 00:28:11.837 END TEST nvmf_initiator_timeout 00:28:11.837 ************************************ 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.837 11:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:19.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:19.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:19.977 Found net devices under 0000:31:00.0: cvl_0_0 00:28:19.977 11:40:18 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:19.977 Found net devices under 0000:31:00.1: cvl_0_1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:19.977 ************************************ 00:28:19.977 START 
TEST nvmf_perf_adq 00:28:19.977 ************************************ 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:19.977 * Looking for test storage... 00:28:19.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.977 11:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:19.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.977 --rc genhtml_branch_coverage=1 00:28:19.977 --rc genhtml_function_coverage=1 00:28:19.977 --rc genhtml_legend=1 00:28:19.977 --rc geninfo_all_blocks=1 00:28:19.977 --rc geninfo_unexecuted_blocks=1 00:28:19.977 00:28:19.977 ' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:19.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.977 --rc genhtml_branch_coverage=1 00:28:19.977 --rc genhtml_function_coverage=1 00:28:19.977 --rc genhtml_legend=1 00:28:19.977 --rc geninfo_all_blocks=1 00:28:19.977 --rc geninfo_unexecuted_blocks=1 00:28:19.977 00:28:19.977 ' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:19.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.977 --rc genhtml_branch_coverage=1 00:28:19.977 --rc genhtml_function_coverage=1 00:28:19.977 --rc genhtml_legend=1 00:28:19.977 --rc geninfo_all_blocks=1 00:28:19.977 --rc geninfo_unexecuted_blocks=1 00:28:19.977 00:28:19.977 ' 00:28:19.977 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:19.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.977 --rc genhtml_branch_coverage=1 00:28:19.977 --rc genhtml_function_coverage=1 00:28:19.977 --rc genhtml_legend=1 00:28:19.977 --rc geninfo_all_blocks=1 00:28:19.977 --rc geninfo_unexecuted_blocks=1 00:28:19.977 00:28:19.977 ' 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.978 
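The `lcov --version` check traced above (scripts/common.sh `lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares component by component. A simplified standalone sketch of that comparison, assuming the same `IFS=.-:` splitting as the trace but not reproducing the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison seen in the scripts/common.sh
# trace: split both versions on ".", "-", ":" and compare numerically,
# component by component; a missing component counts as 0.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # "2" compares like "2.0": absent components default to 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"        # the case the trace exercises
lt 2.1 2.0 || echo "2.1 is not < 2.0"
```

This matches the behavior visible in the trace: `decimal 1` vs `decimal 2` on the first components decides the result immediately, so `lt 1.15 2` returns 0.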
11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:19.978 11:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.978 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.564 11:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:26.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:26.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.564 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:26.565 Found net devices under 0000:31:00.0: cvl_0_0 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:26.565 Found net devices under 0000:31:00.1: cvl_0_1 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:26.565 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:27.515 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:30.052 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:35.333 11:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.333 11:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:35.333 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:35.334 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:28:35.334 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:35.334 Found net devices under 0000:31:00.0: cvl_0_0 
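The trace above resolves each E810 PCI function (0x8086:0x159b) to its kernel net device by globbing the device's sysfs net/ directory and then stripping everything but the final path component. A minimal sketch of that mapping, assuming the same sysfs layout as on this node (the `pci_to_netdev` helper name is illustrative, not part of nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the lookup the nvmf/common.sh@410-428 trace performs above.
# Assumption: a PCI network function exposes its interface name(s)
# under /sys/bus/pci/devices/<addr>/net/ (standard Linux sysfs layout).
pci_to_netdev() {
  local pci=$1
  # Glob the device's net/ directory, e.g. .../0000:31:00.0/net/cvl_0_0
  local devs=("/sys/bus/pci/devices/$pci/net/"*)
  # Keep only the interface name (final path component), as common.sh@427 does
  devs=("${devs[@]##*/}")
  echo "${devs[@]}"
}
# pci_to_netdev 0000:31:00.0   # on the node above this yields cvl_0_0
```

The `${devs[@]##*/}` expansion is the same greedy-prefix strip applied at common.sh@427 to turn sysfs paths into the `cvl_0_0`/`cvl_0_1` names used for the rest of the TCP setup.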
00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:35.334 Found net devices under 0000:31:00.1: cvl_0_1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:35.334 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:28:35.334 00:28:35.334 --- 10.0.0.2 ping statistics --- 00:28:35.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.334 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:28:35.334 00:28:35.334 --- 10.0.0.1 ping statistics --- 00:28:35.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.334 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2628533 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2628533 00:28:35.334 
11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2628533 ']' 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.334 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.334 [2024-12-07 11:40:34.247569] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:35.334 [2024-12-07 11:40:34.247705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.334 [2024-12-07 11:40:34.397880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.334 [2024-12-07 11:40:34.500387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.334 [2024-12-07 11:40:34.500431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:35.334 [2024-12-07 11:40:34.500443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.334 [2024-12-07 11:40:34.500454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.334 [2024-12-07 11:40:34.500463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.334 [2024-12-07 11:40:34.502923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.334 [2024-12-07 11:40:34.503014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.335 [2024-12-07 11:40:34.503152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.335 [2024-12-07 11:40:34.503175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.915 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 [2024-12-07 11:40:35.374243] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 
11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 Malloc1 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.176 [2024-12-07 11:40:35.487640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.176 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2628721 00:28:36.177 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:36.177 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.715 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:38.715 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.715 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.715 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.715 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:38.715 "tick_rate": 2400000000, 00:28:38.715 "poll_groups": [ 00:28:38.715 { 00:28:38.715 "name": "nvmf_tgt_poll_group_000", 00:28:38.715 "admin_qpairs": 1, 00:28:38.715 "io_qpairs": 1, 00:28:38.715 "current_admin_qpairs": 1, 00:28:38.715 "current_io_qpairs": 1, 00:28:38.715 "pending_bdev_io": 0, 00:28:38.715 "completed_nvme_io": 20399, 00:28:38.715 "transports": [ 00:28:38.715 { 00:28:38.715 "trtype": "TCP" 00:28:38.715 } 00:28:38.715 ] 00:28:38.715 }, 00:28:38.715 { 00:28:38.715 "name": "nvmf_tgt_poll_group_001", 00:28:38.715 "admin_qpairs": 0, 00:28:38.715 "io_qpairs": 1, 00:28:38.715 "current_admin_qpairs": 0, 00:28:38.715 "current_io_qpairs": 1, 00:28:38.715 "pending_bdev_io": 0, 00:28:38.715 "completed_nvme_io": 26960, 00:28:38.715 "transports": [ 
00:28:38.715 { 00:28:38.715 "trtype": "TCP" 00:28:38.715 } 00:28:38.715 ] 00:28:38.715 }, 00:28:38.715 { 00:28:38.715 "name": "nvmf_tgt_poll_group_002", 00:28:38.715 "admin_qpairs": 0, 00:28:38.715 "io_qpairs": 1, 00:28:38.715 "current_admin_qpairs": 0, 00:28:38.715 "current_io_qpairs": 1, 00:28:38.715 "pending_bdev_io": 0, 00:28:38.715 "completed_nvme_io": 22575, 00:28:38.715 "transports": [ 00:28:38.715 { 00:28:38.715 "trtype": "TCP" 00:28:38.715 } 00:28:38.715 ] 00:28:38.715 }, 00:28:38.715 { 00:28:38.715 "name": "nvmf_tgt_poll_group_003", 00:28:38.715 "admin_qpairs": 0, 00:28:38.715 "io_qpairs": 1, 00:28:38.715 "current_admin_qpairs": 0, 00:28:38.715 "current_io_qpairs": 1, 00:28:38.715 "pending_bdev_io": 0, 00:28:38.715 "completed_nvme_io": 19904, 00:28:38.716 "transports": [ 00:28:38.716 { 00:28:38.716 "trtype": "TCP" 00:28:38.716 } 00:28:38.716 ] 00:28:38.716 } 00:28:38.716 ] 00:28:38.716 }' 00:28:38.716 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:38.716 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:38.716 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:38.716 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:38.716 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2628721 00:28:46.850 Initializing NVMe Controllers 00:28:46.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:46.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:46.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:46.850 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:46.850 Initialization complete. Launching workers. 00:28:46.850 ======================================================== 00:28:46.850 Latency(us) 00:28:46.850 Device Information : IOPS MiB/s Average min max 00:28:46.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13401.96 52.35 4775.64 1574.76 8642.16 00:28:46.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14730.45 57.54 4344.31 1234.96 9575.95 00:28:46.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14004.36 54.70 4570.08 1480.79 11126.04 00:28:46.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11107.36 43.39 5761.26 1735.96 11392.18 00:28:46.850 ======================================================== 00:28:46.850 Total : 53244.13 207.98 4807.85 1234.96 11392.18 00:28:46.850 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.850 rmmod nvme_tcp 00:28:46.850 rmmod nvme_fabrics 00:28:46.850 rmmod nvme_keyring 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:46.850 11:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2628533 ']' 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2628533 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2628533 ']' 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2628533 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628533 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628533' 00:28:46.850 killing process with pid 2628533 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2628533 00:28:46.850 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2628533 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:47.420 
11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.420 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.959 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.959 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:49.959 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:49.959 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:51.341 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:53.261 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.550 11:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.550 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:58.551 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.551 11:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:58.551 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:28:58.551 Found net devices under 0000:31:00.0: cvl_0_0 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:58.551 Found net devices under 0000:31:00.1: cvl_0_1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:28:58.551 00:28:58.551 --- 10.0.0.2 ping statistics --- 00:28:58.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.551 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:28:58.551 00:28:58.551 --- 10.0.0.1 ping statistics --- 00:28:58.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.551 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:58.551 net.core.busy_poll = 1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:58.551 net.core.busy_read = 1 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:58.551 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2633506 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2633506 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2633506 ']' 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.810 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.811 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:58.811 [2024-12-07 11:40:58.077671] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:58.811 [2024-12-07 11:40:58.077801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.070 [2024-12-07 11:40:58.230943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.070 [2024-12-07 11:40:58.332154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.070 [2024-12-07 11:40:58.332197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.070 [2024-12-07 11:40:58.332209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.070 [2024-12-07 11:40:58.332220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:59.070 [2024-12-07 11:40:58.332229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.070 [2024-12-07 11:40:58.334471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.070 [2024-12-07 11:40:58.334553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.070 [2024-12-07 11:40:58.334670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.070 [2024-12-07 11:40:58.334693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.639 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.901 [2024-12-07 11:40:59.213849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:59.901 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.901 11:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.163 Malloc1 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.163 [2024-12-07 11:40:59.333634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2633838 
00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:00.163 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:02.075 "tick_rate": 2400000000, 00:29:02.075 "poll_groups": [ 00:29:02.075 { 00:29:02.075 "name": "nvmf_tgt_poll_group_000", 00:29:02.075 "admin_qpairs": 1, 00:29:02.075 "io_qpairs": 2, 00:29:02.075 "current_admin_qpairs": 1, 00:29:02.075 "current_io_qpairs": 2, 00:29:02.075 "pending_bdev_io": 0, 00:29:02.075 "completed_nvme_io": 25620, 00:29:02.075 "transports": [ 00:29:02.075 { 00:29:02.075 "trtype": "TCP" 00:29:02.075 } 00:29:02.075 ] 00:29:02.075 }, 00:29:02.075 { 00:29:02.075 "name": "nvmf_tgt_poll_group_001", 00:29:02.075 "admin_qpairs": 0, 00:29:02.075 "io_qpairs": 2, 00:29:02.075 "current_admin_qpairs": 0, 00:29:02.075 "current_io_qpairs": 2, 00:29:02.075 "pending_bdev_io": 0, 00:29:02.075 "completed_nvme_io": 33971, 00:29:02.075 "transports": [ 00:29:02.075 { 00:29:02.075 "trtype": "TCP" 00:29:02.075 } 00:29:02.075 ] 00:29:02.075 }, 00:29:02.075 { 00:29:02.075 "name": "nvmf_tgt_poll_group_002", 00:29:02.075 "admin_qpairs": 0, 00:29:02.075 "io_qpairs": 0, 00:29:02.075 "current_admin_qpairs": 0, 
00:29:02.075 "current_io_qpairs": 0, 00:29:02.075 "pending_bdev_io": 0, 00:29:02.075 "completed_nvme_io": 0, 00:29:02.075 "transports": [ 00:29:02.075 { 00:29:02.075 "trtype": "TCP" 00:29:02.075 } 00:29:02.075 ] 00:29:02.075 }, 00:29:02.075 { 00:29:02.075 "name": "nvmf_tgt_poll_group_003", 00:29:02.075 "admin_qpairs": 0, 00:29:02.075 "io_qpairs": 0, 00:29:02.075 "current_admin_qpairs": 0, 00:29:02.075 "current_io_qpairs": 0, 00:29:02.075 "pending_bdev_io": 0, 00:29:02.075 "completed_nvme_io": 0, 00:29:02.075 "transports": [ 00:29:02.075 { 00:29:02.075 "trtype": "TCP" 00:29:02.075 } 00:29:02.075 ] 00:29:02.075 } 00:29:02.075 ] 00:29:02.075 }' 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:02.075 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2633838 00:29:12.071 Initializing NVMe Controllers 00:29:12.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:12.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:12.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:12.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:12.071 Initialization complete. Launching workers. 
00:29:12.071 ======================================================== 00:29:12.071 Latency(us) 00:29:12.071 Device Information : IOPS MiB/s Average min max 00:29:12.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10585.20 41.35 6046.39 1209.25 52969.75 00:29:12.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8488.10 33.16 7556.26 1157.57 51320.60 00:29:12.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10515.00 41.07 6098.12 1072.97 50860.03 00:29:12.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7425.30 29.01 8619.02 1536.94 53786.99 00:29:12.071 ======================================================== 00:29:12.071 Total : 37013.60 144.58 6923.43 1072.97 53786.99 00:29:12.071 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.071 rmmod nvme_tcp 00:29:12.071 rmmod nvme_fabrics 00:29:12.071 rmmod nvme_keyring 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:12.071 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2633506 ']' 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2633506 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2633506 ']' 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2633506 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633506 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633506' 00:29:12.071 killing process with pid 2633506 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2633506 00:29:12.071 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2633506 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:12.071 
11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.071 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:13.471 00:29:13.471 real 0m54.510s 00:29:13.471 user 2m54.428s 00:29:13.471 sys 0m12.193s 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.471 ************************************ 00:29:13.471 END TEST nvmf_perf_adq 00:29:13.471 ************************************ 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:13.471 ************************************ 00:29:13.471 START TEST nvmf_shutdown 00:29:13.471 ************************************ 00:29:13.471 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:13.733 * Looking for test storage... 00:29:13.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.733 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.734 11:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.734 --rc genhtml_branch_coverage=1 00:29:13.734 --rc genhtml_function_coverage=1 00:29:13.734 --rc genhtml_legend=1 00:29:13.734 --rc geninfo_all_blocks=1 00:29:13.734 --rc geninfo_unexecuted_blocks=1 00:29:13.734 00:29:13.734 ' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.734 --rc genhtml_branch_coverage=1 00:29:13.734 --rc genhtml_function_coverage=1 00:29:13.734 --rc genhtml_legend=1 00:29:13.734 --rc geninfo_all_blocks=1 00:29:13.734 --rc geninfo_unexecuted_blocks=1 00:29:13.734 00:29:13.734 ' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.734 --rc genhtml_branch_coverage=1 00:29:13.734 --rc genhtml_function_coverage=1 00:29:13.734 --rc genhtml_legend=1 00:29:13.734 --rc geninfo_all_blocks=1 00:29:13.734 --rc geninfo_unexecuted_blocks=1 00:29:13.734 00:29:13.734 ' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.734 --rc genhtml_branch_coverage=1 00:29:13.734 --rc genhtml_function_coverage=1 00:29:13.734 --rc genhtml_legend=1 00:29:13.734 --rc geninfo_all_blocks=1 00:29:13.734 --rc geninfo_unexecuted_blocks=1 00:29:13.734 00:29:13.734 ' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.734 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.734 ************************************ 00:29:13.734 START TEST nvmf_shutdown_tc1 00:29:13.734 ************************************ 00:29:13.734 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:13.734 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:13.734 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:13.734 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.734 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.735 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:21.878 11:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.878 11:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:21.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.878 11:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:21.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:21.878 Found net devices under 0000:31:00.0: cvl_0_0 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:21.878 Found net devices under 0000:31:00.1: cvl_0_1 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.878 11:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.878 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:29:21.879 00:29:21.879 --- 10.0.0.2 ping statistics --- 00:29:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.879 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:29:21.879 00:29:21.879 --- 10.0.0.1 ping statistics --- 00:29:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.879 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2640145 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2640145 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2640145 ']' 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.879 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.879 [2024-12-07 11:41:20.621047] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:21.879 [2024-12-07 11:41:20.621172] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.879 [2024-12-07 11:41:20.791736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.879 [2024-12-07 11:41:20.919638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.879 [2024-12-07 11:41:20.919710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.879 [2024-12-07 11:41:20.919724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.879 [2024-12-07 11:41:20.919738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.879 [2024-12-07 11:41:20.919749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:21.879 [2024-12-07 11:41:20.922673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.879 [2024-12-07 11:41:20.922827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.879 [2024-12-07 11:41:20.922935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.879 [2024-12-07 11:41:20.922962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.140 [2024-12-07 11:41:21.442892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.140 11:41:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.140 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.402 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.402 Malloc1 00:29:22.402 [2024-12-07 11:41:21.606749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.402 Malloc2 00:29:22.402 Malloc3 00:29:22.663 Malloc4 00:29:22.663 Malloc5 00:29:22.663 Malloc6 00:29:22.925 Malloc7 00:29:22.925 Malloc8 00:29:22.925 Malloc9 
00:29:23.186 Malloc10 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2640540 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2640540 /var/tmp/bdevperf.sock 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2640540 ']' 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:23.186 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": 
${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 
00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.187 { 00:29:23.187 "params": { 00:29:23.187 "name": "Nvme$subsystem", 00:29:23.187 "trtype": "$TEST_TRANSPORT", 00:29:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.187 "adrfam": "ipv4", 00:29:23.187 "trsvcid": "$NVMF_PORT", 00:29:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.187 "hdgst": ${hdgst:-false}, 00:29:23.187 "ddgst": ${ddgst:-false} 00:29:23.187 }, 00:29:23.187 "method": "bdev_nvme_attach_controller" 00:29:23.187 } 00:29:23.187 EOF 00:29:23.187 )") 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:23.187 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:23.187 [2024-12-07 11:41:22.449172] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:23.187 [2024-12-07 11:41:22.449286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:23.188 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:23.188 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme1", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme2", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme3", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 
00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme4", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme5", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme6", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme7", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme8", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 
00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme9", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 },{ 00:29:23.188 "params": { 00:29:23.188 "name": "Nvme10", 00:29:23.188 "trtype": "tcp", 00:29:23.188 "traddr": "10.0.0.2", 00:29:23.188 "adrfam": "ipv4", 00:29:23.188 "trsvcid": "4420", 00:29:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:23.188 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:23.188 "hdgst": false, 00:29:23.188 "ddgst": false 00:29:23.188 }, 00:29:23.188 "method": "bdev_nvme_attach_controller" 00:29:23.188 }' 00:29:23.449 [2024-12-07 11:41:22.578379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.449 [2024-12-07 11:41:22.677531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- 
# set +x 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2640540 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:24.834 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:25.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2640540 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2640145 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.776 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.776 { 00:29:25.776 "params": { 00:29:25.776 "name": "Nvme$subsystem", 00:29:25.776 "trtype": "$TEST_TRANSPORT", 00:29:25.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.776 "adrfam": "ipv4", 
00:29:25.776 "trsvcid": "$NVMF_PORT", 00:29:25.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.776 "hdgst": ${hdgst:-false}, 00:29:25.776 "ddgst": ${ddgst:-false} 00:29:25.776 }, 00:29:25.776 "method": "bdev_nvme_attach_controller" 00:29:25.776 } 00:29:25.776 EOF 00:29:25.776 )") 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.037 { 00:29:26.037 "params": { 00:29:26.037 "name": "Nvme$subsystem", 00:29:26.037 "trtype": "$TEST_TRANSPORT", 00:29:26.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.037 "adrfam": "ipv4", 00:29:26.037 "trsvcid": "$NVMF_PORT", 00:29:26.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.037 "hdgst": ${hdgst:-false}, 00:29:26.037 "ddgst": ${ddgst:-false} 00:29:26.037 }, 00:29:26.037 "method": "bdev_nvme_attach_controller" 00:29:26.037 } 00:29:26.037 EOF 00:29:26.037 )") 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.037 { 00:29:26.037 "params": { 00:29:26.037 "name": "Nvme$subsystem", 00:29:26.037 "trtype": "$TEST_TRANSPORT", 00:29:26.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.037 "adrfam": "ipv4", 00:29:26.037 "trsvcid": "$NVMF_PORT", 00:29:26.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.037 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:26.037 "hdgst": ${hdgst:-false}, 00:29:26.037 "ddgst": ${ddgst:-false} 00:29:26.037 }, 00:29:26.037 "method": "bdev_nvme_attach_controller" 00:29:26.037 } 00:29:26.037 EOF 00:29:26.037 )") 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.037 { 00:29:26.037 "params": { 00:29:26.037 "name": "Nvme$subsystem", 00:29:26.037 "trtype": "$TEST_TRANSPORT", 00:29:26.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.037 "adrfam": "ipv4", 00:29:26.037 "trsvcid": "$NVMF_PORT", 00:29:26.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.037 "hdgst": ${hdgst:-false}, 00:29:26.037 "ddgst": ${ddgst:-false} 00:29:26.037 }, 00:29:26.037 "method": "bdev_nvme_attach_controller" 00:29:26.037 } 00:29:26.037 EOF 00:29:26.037 )") 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.037 { 00:29:26.037 "params": { 00:29:26.037 "name": "Nvme$subsystem", 00:29:26.037 "trtype": "$TEST_TRANSPORT", 00:29:26.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.037 "adrfam": "ipv4", 00:29:26.037 "trsvcid": "$NVMF_PORT", 00:29:26.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.037 "hdgst": ${hdgst:-false}, 00:29:26.037 "ddgst": ${ddgst:-false} 00:29:26.037 
}, 00:29:26.037 "method": "bdev_nvme_attach_controller" 00:29:26.037 } 00:29:26.037 EOF 00:29:26.037 )") 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.037 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.037 { 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme$subsystem", 00:29:26.038 "trtype": "$TEST_TRANSPORT", 00:29:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "$NVMF_PORT", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.038 "hdgst": ${hdgst:-false}, 00:29:26.038 "ddgst": ${ddgst:-false} 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 } 00:29:26.038 EOF 00:29:26.038 )") 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.038 { 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme$subsystem", 00:29:26.038 "trtype": "$TEST_TRANSPORT", 00:29:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "$NVMF_PORT", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.038 "hdgst": ${hdgst:-false}, 00:29:26.038 "ddgst": ${ddgst:-false} 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 } 00:29:26.038 EOF 00:29:26.038 )") 00:29:26.038 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.038 { 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme$subsystem", 00:29:26.038 "trtype": "$TEST_TRANSPORT", 00:29:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "$NVMF_PORT", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.038 "hdgst": ${hdgst:-false}, 00:29:26.038 "ddgst": ${ddgst:-false} 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 } 00:29:26.038 EOF 00:29:26.038 )") 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.038 { 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme$subsystem", 00:29:26.038 "trtype": "$TEST_TRANSPORT", 00:29:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "$NVMF_PORT", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.038 "hdgst": ${hdgst:-false}, 00:29:26.038 "ddgst": ${ddgst:-false} 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 } 00:29:26.038 EOF 00:29:26.038 )") 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.038 11:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.038 { 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme$subsystem", 00:29:26.038 "trtype": "$TEST_TRANSPORT", 00:29:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "$NVMF_PORT", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.038 "hdgst": ${hdgst:-false}, 00:29:26.038 "ddgst": ${ddgst:-false} 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 } 00:29:26.038 EOF 00:29:26.038 )") 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:26.038 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme1", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme2", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme3", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme4", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 
00:29:26.038 "name": "Nvme5", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme6", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme7", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme8", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme9", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 },{ 00:29:26.038 "params": { 00:29:26.038 "name": "Nvme10", 00:29:26.038 "trtype": "tcp", 00:29:26.038 "traddr": "10.0.0.2", 00:29:26.038 "adrfam": "ipv4", 00:29:26.038 "trsvcid": "4420", 00:29:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:26.038 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:26.038 "hdgst": false, 00:29:26.038 "ddgst": false 00:29:26.038 }, 00:29:26.038 "method": "bdev_nvme_attach_controller" 00:29:26.038 }' 00:29:26.038 [2024-12-07 11:41:25.211004] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:26.038 [2024-12-07 11:41:25.211124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641134 ] 00:29:26.038 [2024-12-07 11:41:25.339586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.299 [2024-12-07 11:41:25.437363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.232 Running I/O for 1 seconds... 
00:29:29.066 1732.00 IOPS, 108.25 MiB/s 00:29:29.066 Latency(us) 00:29:29.066 [2024-12-07T10:41:28.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.066 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme1n1 : 1.15 222.87 13.93 0.00 0.00 284158.51 22719.15 267386.88 00:29:29.066 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme2n1 : 1.15 227.07 14.19 0.00 0.00 270299.03 17257.81 253405.87 00:29:29.066 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme3n1 : 1.13 226.06 14.13 0.00 0.00 270111.79 23702.19 272629.76 00:29:29.066 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme4n1 : 1.19 215.42 13.46 0.00 0.00 278921.17 16602.45 284863.15 00:29:29.066 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme5n1 : 1.14 225.42 14.09 0.00 0.00 260800.21 23046.83 237677.23 00:29:29.066 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme6n1 : 1.19 214.45 13.40 0.00 0.00 270304.64 17476.27 290106.03 00:29:29.066 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme7n1 : 1.15 222.06 13.88 0.00 0.00 254589.23 20862.29 260396.37 00:29:29.066 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme8n1 : 1.23 218.69 13.67 0.00 0.00 241228.77 6062.08 263891.63 
00:29:29.066 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme9n1 : 1.21 265.44 16.59 0.00 0.00 206268.25 5816.32 263891.63 00:29:29.066 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.066 Verification LBA range: start 0x0 length 0x400 00:29:29.066 Nvme10n1 : 1.20 213.53 13.35 0.00 0.00 251599.57 16165.55 288358.40 00:29:29.066 [2024-12-07T10:41:28.420Z] =================================================================================================================== 00:29:29.066 [2024-12-07T10:41:28.420Z] Total : 2251.02 140.69 0.00 0.00 257491.32 5816.32 290106.03 00:29:30.011 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:30.011 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:30.011 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:30.011 11:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.011 rmmod nvme_tcp 00:29:30.011 rmmod nvme_fabrics 00:29:30.011 rmmod nvme_keyring 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2640145 ']' 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2640145 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2640145 ']' 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2640145 00:29:30.011 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640145 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:30.012 11:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640145' 00:29:30.012 killing process with pid 2640145 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2640145 00:29:30.012 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2640145 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.401 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.951 00:29:33.951 real 0m19.648s 00:29:33.951 user 0m45.229s 00:29:33.951 sys 0m7.138s 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 ************************************ 00:29:33.951 END TEST nvmf_shutdown_tc1 00:29:33.951 ************************************ 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 ************************************ 00:29:33.951 START TEST nvmf_shutdown_tc2 00:29:33.951 ************************************ 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:33.951 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:33.952 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:33.952 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:33.952 Found net devices under 0000:31:00.0: cvl_0_0 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.952 11:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:33.952 Found net devices under 0000:31:00.1: cvl_0_1 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.952 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.953 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:29:33.953 00:29:33.953 --- 10.0.0.2 ping statistics --- 00:29:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.953 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:29:33.953 00:29:33.953 --- 10.0.0.1 ping statistics --- 00:29:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.953 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.953 
11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2642697 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2642697 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2642697 ']' 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.953 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.953 [2024-12-07 11:41:33.234160] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:29:33.953 [2024-12-07 11:41:33.234288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.296 [2024-12-07 11:41:33.391143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.296 [2024-12-07 11:41:33.476353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.296 [2024-12-07 11:41:33.476393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.296 [2024-12-07 11:41:33.476403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.296 [2024-12-07 11:41:33.476411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.296 [2024-12-07 11:41:33.476419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
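The `ipts` call traced earlier (nvmf/common.sh@287/@790) tags every iptables rule it adds with an `SPDK_NVMF:` comment recording the original arguments, so teardown can later match on that tag and delete only the rules this test created. A minimal sketch of that wrapper pattern, with `echo` standing in for the real `iptables` binary so it can run without privileges:

```shell
# Sketch of the ipts() helper pattern from nvmf/common.sh: forward all
# arguments and append a comment tag that records the rule verbatim, so a
# cleanup pass can grep for "SPDK_NVMF:" and remove only test-added rules.
# NOTE: "echo" is a stand-in for the real iptables call in this sketch.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Mirrors the rule added in this trace for the NVMe/TCP listener port.
rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
```

The comment-tagging trick is what lets the harness clean up firewall state reliably even if a test run is interrupted between setup and teardown.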
00:29:34.296 [2024-12-07 11:41:33.478491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.296 [2024-12-07 11:41:33.478632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.296 [2024-12-07 11:41:33.478732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.296 [2024-12-07 11:41:33.478760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.932 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.932 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:34.932 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.932 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.932 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 [2024-12-07 11:41:34.032129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.932 11:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.932 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.933 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.933 Malloc1 00:29:34.933 [2024-12-07 11:41:34.173736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.933 Malloc2 00:29:35.236 Malloc3 00:29:35.236 Malloc4 00:29:35.236 Malloc5 00:29:35.236 Malloc6 00:29:35.236 Malloc7 00:29:35.498 Malloc8 00:29:35.498 Malloc9 
00:29:35.498 Malloc10 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2643058 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2643058 /var/tmp/bdevperf.sock 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2643058 ']' 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.498 { 00:29:35.498 "params": { 00:29:35.498 "name": "Nvme$subsystem", 00:29:35.498 "trtype": "$TEST_TRANSPORT", 00:29:35.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.498 "adrfam": "ipv4", 00:29:35.498 "trsvcid": "$NVMF_PORT", 00:29:35.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.498 "hdgst": ${hdgst:-false}, 00:29:35.498 "ddgst": ${ddgst:-false} 00:29:35.498 }, 00:29:35.498 "method": "bdev_nvme_attach_controller" 00:29:35.498 } 00:29:35.498 EOF 00:29:35.498 )") 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.498 { 00:29:35.498 "params": { 00:29:35.498 "name": "Nvme$subsystem", 00:29:35.498 "trtype": "$TEST_TRANSPORT", 00:29:35.498 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.498 "adrfam": "ipv4", 00:29:35.498 "trsvcid": "$NVMF_PORT", 00:29:35.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.498 "hdgst": ${hdgst:-false}, 00:29:35.498 "ddgst": ${ddgst:-false} 00:29:35.498 }, 00:29:35.498 "method": "bdev_nvme_attach_controller" 00:29:35.498 } 00:29:35.498 EOF 00:29:35.498 )") 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.498 { 00:29:35.498 "params": { 00:29:35.498 "name": "Nvme$subsystem", 00:29:35.498 "trtype": "$TEST_TRANSPORT", 00:29:35.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.498 "adrfam": "ipv4", 00:29:35.498 "trsvcid": "$NVMF_PORT", 00:29:35.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.498 "hdgst": ${hdgst:-false}, 00:29:35.498 "ddgst": ${ddgst:-false} 00:29:35.498 }, 00:29:35.498 "method": "bdev_nvme_attach_controller" 00:29:35.498 } 00:29:35.498 EOF 00:29:35.498 )") 00:29:35.498 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.760 { 00:29:35.760 "params": { 00:29:35.760 "name": "Nvme$subsystem", 00:29:35.760 "trtype": "$TEST_TRANSPORT", 00:29:35.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.760 "adrfam": "ipv4", 00:29:35.760 "trsvcid": "$NVMF_PORT", 00:29:35.760 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.760 "hdgst": ${hdgst:-false}, 00:29:35.760 "ddgst": ${ddgst:-false} 00:29:35.760 }, 00:29:35.760 "method": "bdev_nvme_attach_controller" 00:29:35.760 } 00:29:35.760 EOF 00:29:35.760 )") 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.760 { 00:29:35.760 "params": { 00:29:35.760 "name": "Nvme$subsystem", 00:29:35.760 "trtype": "$TEST_TRANSPORT", 00:29:35.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.760 "adrfam": "ipv4", 00:29:35.760 "trsvcid": "$NVMF_PORT", 00:29:35.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.760 "hdgst": ${hdgst:-false}, 00:29:35.760 "ddgst": ${ddgst:-false} 00:29:35.760 }, 00:29:35.760 "method": "bdev_nvme_attach_controller" 00:29:35.760 } 00:29:35.760 EOF 00:29:35.760 )") 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.760 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.760 { 00:29:35.760 "params": { 00:29:35.761 "name": "Nvme$subsystem", 00:29:35.761 "trtype": "$TEST_TRANSPORT", 00:29:35.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "$NVMF_PORT", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.761 "hdgst": 
${hdgst:-false}, 00:29:35.761 "ddgst": ${ddgst:-false} 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 } 00:29:35.761 EOF 00:29:35.761 )") 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.761 { 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme$subsystem", 00:29:35.761 "trtype": "$TEST_TRANSPORT", 00:29:35.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "$NVMF_PORT", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.761 "hdgst": ${hdgst:-false}, 00:29:35.761 "ddgst": ${ddgst:-false} 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 } 00:29:35.761 EOF 00:29:35.761 )") 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.761 { 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme$subsystem", 00:29:35.761 "trtype": "$TEST_TRANSPORT", 00:29:35.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "$NVMF_PORT", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.761 "hdgst": ${hdgst:-false}, 00:29:35.761 "ddgst": ${ddgst:-false} 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 
00:29:35.761 } 00:29:35.761 EOF 00:29:35.761 )") 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.761 { 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme$subsystem", 00:29:35.761 "trtype": "$TEST_TRANSPORT", 00:29:35.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "$NVMF_PORT", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.761 "hdgst": ${hdgst:-false}, 00:29:35.761 "ddgst": ${ddgst:-false} 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 } 00:29:35.761 EOF 00:29:35.761 )") 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.761 { 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme$subsystem", 00:29:35.761 "trtype": "$TEST_TRANSPORT", 00:29:35.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "$NVMF_PORT", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.761 "hdgst": ${hdgst:-false}, 00:29:35.761 "ddgst": ${ddgst:-false} 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 } 00:29:35.761 EOF 00:29:35.761 )") 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:35.761 [2024-12-07 11:41:34.902913] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:35.761 [2024-12-07 11:41:34.903031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643058 ] 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:35.761 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme1", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme2", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme3", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:35.761 "hdgst": false, 00:29:35.761 
"ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme4", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme5", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme6", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme7", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme8", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 
"trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme9", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 },{ 00:29:35.761 "params": { 00:29:35.761 "name": "Nvme10", 00:29:35.761 "trtype": "tcp", 00:29:35.761 "traddr": "10.0.0.2", 00:29:35.761 "adrfam": "ipv4", 00:29:35.761 "trsvcid": "4420", 00:29:35.761 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:35.761 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:35.761 "hdgst": false, 00:29:35.761 "ddgst": false 00:29:35.761 }, 00:29:35.761 "method": "bdev_nvme_attach_controller" 00:29:35.761 }' 00:29:35.761 [2024-12-07 11:41:35.032862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.022 [2024-12-07 11:41:35.131240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.934 Running I/O for 10 seconds... 
00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:38.194 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2643058 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2643058 ']' 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2643058 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.455 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2643058 00:29:38.716 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.716 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.716 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2643058' 00:29:38.716 killing process with pid 2643058 00:29:38.716 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2643058 00:29:38.716 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2643058 00:29:38.716 1418.00 
IOPS, 88.62 MiB/s [2024-12-07T10:41:38.070Z] Received shutdown signal, test time was about 1.080817 seconds 00:29:38.716 00:29:38.716 Latency(us) 00:29:38.716 [2024-12-07T10:41:38.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme1n1 : 1.05 182.31 11.39 0.00 0.00 344982.76 19660.80 333796.69 00:29:38.716 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme2n1 : 1.06 180.89 11.31 0.00 0.00 337037.94 18459.31 304087.04 00:29:38.716 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme3n1 : 1.06 186.41 11.65 0.00 0.00 313709.04 3904.85 272629.76 00:29:38.716 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme4n1 : 1.08 237.94 14.87 0.00 0.00 239853.01 20862.29 263891.63 00:29:38.716 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme5n1 : 1.02 188.33 11.77 0.00 0.00 291225.32 18022.40 272629.76 00:29:38.716 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme6n1 : 1.04 184.79 11.55 0.00 0.00 287047.40 23374.51 241172.48 00:29:38.716 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme7n1 : 1.08 237.06 14.82 0.00 0.00 217138.35 16274.77 272629.76 00:29:38.716 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 
00:29:38.716 Nvme8n1 : 1.05 187.92 11.74 0.00 0.00 260175.63 3713.71 244667.73 00:29:38.716 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme9n1 : 1.07 179.81 11.24 0.00 0.00 264625.21 15619.41 284863.15 00:29:38.716 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.716 Verification LBA range: start 0x0 length 0x400 00:29:38.716 Nvme10n1 : 1.07 179.10 11.19 0.00 0.00 255649.56 18459.31 291853.65 00:29:38.716 [2024-12-07T10:41:38.070Z] =================================================================================================================== 00:29:38.716 [2024-12-07T10:41:38.070Z] Total : 1944.56 121.54 0.00 0.00 277898.04 3713.71 333796.69 00:29:39.294 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2642697 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.680 11:41:39 
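The per-device rows in the bdevperf summary above can be sanity-checked against its Total line with a short awk pass. The figures below are copied verbatim from the log (column 3 is runtime, column 4 is IOPS); this is just an illustrative sketch, not part of the test harness.

```shell
# Sum the IOPS column (field 4) of the per-device rows and compare against
# the Total row the log reports (1944.56 IOPS across the ten namespaces).
summary='Nvme1n1 : 1.05 182.31
Nvme2n1 : 1.06 180.89
Nvme3n1 : 1.06 186.41
Nvme4n1 : 1.08 237.94
Nvme5n1 : 1.02 188.33
Nvme6n1 : 1.04 184.79
Nvme7n1 : 1.08 237.06
Nvme8n1 : 1.05 187.92
Nvme9n1 : 1.07 179.81
Nvme10n1 : 1.07 179.10'
printf '%s\n' "$summary" | awk '{ total += $4 } END { printf "%.2f\n", total }'
# prints 1944.56, matching the Total row in the summary above
```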
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.680 rmmod nvme_tcp 00:29:40.680 rmmod nvme_fabrics 00:29:40.680 rmmod nvme_keyring 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2642697 ']' 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2642697 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2642697 ']' 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2642697 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642697 
00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642697' 00:29:40.680 killing process with pid 2642697 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2642697 00:29:40.680 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2642697 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
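The `killprocess` calls traced above (the real helper lives in `common/autotest_common.sh`) follow a recognisable pattern: confirm the PID is still alive, check its command name so the `sudo` wrapper is never killed by mistake, then kill and reap it. A hedged re-creation of that pattern:

```shell
# Sketch of the killprocess pattern seen in the trace; simplified, not the
# harness's actual implementation.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1          # is the process still running?
  local name
  name=$(ps -o comm= -p "$pid")
  [ "$name" = sudo ] && return 1                   # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                  # reap; ignore the signal exit code
}

# demonstrate on a throwaway background job
sleep 30 & demo_pid=$!
killprocess "$demo_pid"
```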
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.068 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.982 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.982 00:29:43.982 real 0m10.570s 00:29:43.982 user 0m34.408s 00:29:43.982 sys 0m1.563s 00:29:43.982 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.982 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.982 ************************************ 00:29:43.982 END TEST nvmf_shutdown_tc2 00:29:43.982 ************************************ 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.244 ************************************ 00:29:44.244 START TEST nvmf_shutdown_tc3 00:29:44.244 ************************************ 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 
00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 
00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.244 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.245 11:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:44.245 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:44.245 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.245 11:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:44.245 Found net devices under 0000:31:00.0: cvl_0_0 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.245 11:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:44.245 Found net devices under 0000:31:00.1: cvl_0_1 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
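The "Found net devices under 0000:31:00.x" lines above come from globbing sysfs to map a PCI address to its bound network interface. A sketch of that lookup, parameterised on a sysfs root so it can be exercised against a throwaway tree (on a real host you would pass `/sys/bus/pci`):

```shell
# Map a PCI address to the NIC name(s) registered under it in sysfs.
net_devs_for_pci() {
  local sysfs_root=$1 pci=$2 dev
  for dev in "$sysfs_root/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue      # unmatched glob: no NIC bound to this PCI device
    echo "Found net devices under $pci: ${dev##*/}"
  done
}

# exercise against a fake tree mimicking the log's 0000:31:00.0 device
root=$(mktemp -d)
mkdir -p "$root/devices/0000:31:00.0/net/cvl_0_0"
net_devs_for_pci "$root" 0000:31:00.0
# prints: Found net devices under 0000:31:00.0: cvl_0_0
```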
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.245 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.507 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:29:44.508 00:29:44.508 --- 10.0.0.2 ping statistics --- 00:29:44.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.508 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:44.508 00:29:44.508 --- 10.0.0.1 ping statistics --- 00:29:44.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.508 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.508 
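The `nvmf_tcp_init` sequence traced above isolates the target NIC in its own network namespace, addresses both ends of the 10.0.0.0/24 link, opens TCP/4420, and verifies reachability with ping in both directions. A dry-run sketch of that plumbing (device and namespace names mirror the log; `run()` echoes each command instead of executing it, since applying them needs root and the actual NICs):

```shell
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side NIC, 10.0.0.2, moved into the namespace
INI_IF=cvl_0_1      # initiator-side NIC, 10.0.0.1, stays in the root namespace

run() { echo "+ $*"; }   # replace the body with "$@" to actually apply (root required)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```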
11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2644875 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2644875 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2644875 ']' 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.508 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.769 [2024-12-07 11:41:43.896478] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:29:44.769 [2024-12-07 11:41:43.896609] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.769 [2024-12-07 11:41:44.051000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.031 [2024-12-07 11:41:44.135646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.031 [2024-12-07 11:41:44.135685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.031 [2024-12-07 11:41:44.135693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.031 [2024-12-07 11:41:44.135701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.031 [2024-12-07 11:41:44.135708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:45.031 [2024-12-07 11:41:44.137430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.031 [2024-12-07 11:41:44.137571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.031 [2024-12-07 11:41:44.137666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.031 [2024-12-07 11:41:44.137693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.602 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.603 [2024-12-07 11:41:44.710230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.603 11:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.603 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.603 Malloc1 00:29:45.603 [2024-12-07 11:41:44.852736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.603 Malloc2 00:29:45.863 Malloc3 00:29:45.863 Malloc4 00:29:45.863 Malloc5 00:29:45.863 Malloc6 00:29:46.124 Malloc7 00:29:46.124 Malloc8 00:29:46.124 Malloc9 
00:29:46.124 Malloc10 00:29:46.124 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.124 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:46.124 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.124 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2645211 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2645211 /var/tmp/bdevperf.sock 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2645211 ']' 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:46.386 { 00:29:46.386 "params": { 00:29:46.386 "name": "Nvme$subsystem", 00:29:46.386 "trtype": "$TEST_TRANSPORT", 00:29:46.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.386 "adrfam": "ipv4", 00:29:46.386 "trsvcid": "$NVMF_PORT", 00:29:46.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.386 "hdgst": ${hdgst:-false}, 00:29:46.386 "ddgst": ${ddgst:-false} 00:29:46.386 }, 00:29:46.386 "method": "bdev_nvme_attach_controller" 00:29:46.386 } 00:29:46.386 EOF 00:29:46.386 )") 00:29:46.386 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:46.387 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:46.387 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:46.387 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme1", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme2", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme3", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme4", 
00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme5", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme6", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme7", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme8", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:46.387 "hdgst": false, 
00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme9", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 },{ 00:29:46.387 "params": { 00:29:46.387 "name": "Nvme10", 00:29:46.387 "trtype": "tcp", 00:29:46.387 "traddr": "10.0.0.2", 00:29:46.387 "adrfam": "ipv4", 00:29:46.387 "trsvcid": "4420", 00:29:46.387 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:46.387 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:46.387 "hdgst": false, 00:29:46.387 "ddgst": false 00:29:46.387 }, 00:29:46.387 "method": "bdev_nvme_attach_controller" 00:29:46.387 }' 00:29:46.387 [2024-12-07 11:41:45.582529] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:46.387 [2024-12-07 11:41:45.582637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645211 ] 00:29:46.387 [2024-12-07 11:41:45.712020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.647 [2024-12-07 11:41:45.810080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.564 Running I/O for 10 seconds... 
00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:48.826 11:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.826 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:48.827 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:48.827 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.088 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2644875 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2644875 ']' 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2644875 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.089 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644875 00:29:49.368 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.368 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:49.368 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644875' 00:29:49.368 killing process with pid 2644875 00:29:49.368 11:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2644875 00:29:49.368 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2644875 00:29:49.368 [2024-12-07 11:41:48.477023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:49.368 [2024-12-07 11:41:48.481547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 
[2024-12-07 11:41:48.481616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.369 [2024-12-07 11:41:48.481793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481837] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 
[2024-12-07 11:41:48.481914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.481928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the 
state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.483993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484000] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 
[2024-12-07 11:41:48.484081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the 
state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.370 [2024-12-07 11:41:48.484183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.484190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.484197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485401] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 
[2024-12-07 11:41:48.485479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.485694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:49.371 [2024-12-07 11:41:48.486705] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set
00:29:49.371 [2024-12-07 11:41:48.486905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.371 [2024-12-07 11:41:48.486976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.371 [2024-12-07 11:41:48.486987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.486997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0280 is same with the state(6) to be set
00:29:49.372 [2024-12-07 11:41:48.487074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0c80 is same with the state(6) to be set
00:29:49.372 [2024-12-07 11:41:48.487198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:29:49.372 [2024-12-07 11:41:48.487290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set
00:29:49.372 [2024-12-07 11:41:48.487322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039ec00 is same with the state(6) to be set
00:29:49.372 [2024-12-07 11:41:48.487454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:49.372 [2024-12-07 11:41:48.487538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.372 [2024-12-07 11:41:48.487549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f600 is same with the state(6) to be set
00:29:49.373 [2024-12-07 11:41:48.487729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the
state(6) to be set
00:29:49.373 [2024-12-07 11:41:48.488007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.373 [2024-12-07 11:41:48.488597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.373 [2024-12-07 11:41:48.488609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.488984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.488994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07 11:41:48.489239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.374 [2024-12-07 11:41:48.489251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.374 [2024-12-07
11:41:48.489262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489395] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.374 [2024-12-07 11:41:48.489526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.374 [2024-12-07 11:41:48.489539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.375 [2024-12-07 11:41:48.489675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489728] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 
[2024-12-07 11:41:48.489811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.489975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.489990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 
11:41:48.489998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.489997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.490005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.490022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.490037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.490043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is 
same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.490061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.490076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.375 [2024-12-07 11:41:48.490091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.375 [2024-12-07 11:41:48.490106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.375 [2024-12-07 11:41:48.490112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.376 [2024-12-07 11:41:48.490122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.376 [2024-12-07 11:41:48.490124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:49.376 [2024-12-07 11:41:48.490137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.376 [2024-12-07 11:41:48.490357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490486] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 
[2024-12-07 11:41:48.490891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.490976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.490987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.491001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.376 [2024-12-07 11:41:48.491016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.376 [2024-12-07 11:41:48.491029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.377 [2024-12-07 11:41:48.491040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.377 [2024-12-07 11:41:48.491053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.377 [2024-12-07 11:41:48.491064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.377 [2024-12-07 11:41:48.491384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be 
set
[2024-12-07 11:41:48.491768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.491821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.492635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:49.377 [2024-12-07 11:41:48.492655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:49.378 [2024-12-07 11:41:48.492662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:29:49.378 [2024-12-07 11:41:48.493053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x618000009480 is same with the state(6) to be set 00:29:49.378 [2024-12-07 11:41:48.493059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:49.378 [2024-12-07 11:41:48.504260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.378 [2024-12-07 11:41:48.504428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.378 [2024-12-07 11:41:48.504539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.378 [2024-12-07 11:41:48.504552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.504684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.504978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 
11:41:48.505757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-12-07 11:41:48.505865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.379 [2024-12-07 11:41:48.505876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.505888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.505900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.505913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.505923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.505938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.505949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.505962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.505973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.505987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.505999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 
[2024-12-07 11:41:48.506176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.506596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.380 [2024-12-07 11:41:48.506608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor 00:29:49.380 [2024-12-07 11:41:48.507293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0280 (9): Bad file descriptor 00:29:49.380 [2024-12-07 11:41:48.507343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507419] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2a80 is same with the state(6) to be set 00:29:49.380 [2024-12-07 11:41:48.507482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-12-07 11:41:48.507552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.380 [2024-12-07 11:41:48.507563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3480 is same 
with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.507607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1680 is same with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.507742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507767] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.381 [2024-12-07 11:41:48.507823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.381 [2024-12-07 11:41:48.507833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2080 is same with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.507855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.507879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.507898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.507914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f600 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.511940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:49.381 [2024-12-07 
11:41:48.511985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:49.381 [2024-12-07 11:41:48.512002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:49.381 [2024-12-07 11:41:48.512679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.381 [2024-12-07 11:41:48.512717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:29:49.381 [2024-12-07 11:41:48.512731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.513264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.381 [2024-12-07 11:41:48.513311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:29:49.381 [2024-12-07 11:41:48.513327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039ec00 is same with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.513667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.381 [2024-12-07 11:41:48.513684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f600 with addr=10.0.0.2, port=4420 00:29:49.381 [2024-12-07 11:41:48.513695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f600 is same with the state(6) to be set 00:29:49.381 [2024-12-07 11:41:48.515004] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515081] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515290] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:49.381 [2024-12-07 11:41:48.515321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.515340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.515353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f600 (9): Bad file descriptor 00:29:49.381 [2024-12-07 11:41:48.515707] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515759] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515800] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515841] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:49.381 [2024-12-07 11:41:48.515869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:49.381 [2024-12-07 11:41:48.515883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:49.381 [2024-12-07 11:41:48.515898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:49.381 [2024-12-07 11:41:48.515914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:49.381 [2024-12-07 11:41:48.515929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:49.381 [2024-12-07 11:41:48.515939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:49.381 [2024-12-07 11:41:48.515949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:49.381 [2024-12-07 11:41:48.515959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:49.381 [2024-12-07 11:41:48.515970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:49.381 [2024-12-07 11:41:48.515980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:49.381 [2024-12-07 11:41:48.515990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:49.381 [2024-12-07 11:41:48.516000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:49.381 [2024-12-07 11:41:48.517277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2a80 (9): Bad file descriptor
00:29:49.381 [2024-12-07 11:41:48.517322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor
00:29:49.381 [2024-12-07 11:41:48.517354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1680 (9): Bad file descriptor
00:29:49.381 [2024-12-07 11:41:48.517376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2080 (9): Bad file descriptor
00:29:49.381 [2024-12-07 11:41:48.517527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.381 [2024-12-07 11:41:48.517550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for READ cid:5-58 (lba:25216-32000), WRITE cid:0-3 (lba:32768-33152), and READ cid:59-63 (lba:32128-32640), each completing with ABORTED - SQ DELETION (00/08) ...]
00:29:49.383 [2024-12-07 11:41:48.519195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6680 is same with the state(6) to be set
00:29:49.383 [2024-12-07 11:41:48.520701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.383 [2024-12-07 11:41:48.520723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for READ cid:1-45 (lba:16512-22144), each completing with ABORTED - SQ DELETION (00/08) ...]
00:29:49.384 [2024-12-07 11:41:48.521820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.384 [2024-12-07 11:41:48.521830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.521982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.521992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.384 [2024-12-07 11:41:48.522101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.384 [2024-12-07 11:41:48.522156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.384 [2024-12-07 11:41:48.522169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.522182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.522195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.522205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.522218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.522229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.522240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6900 is same with the state(6) to be set 00:29:49.385 [2024-12-07 11:41:48.523757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.523982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524319] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 
11:41:48.524605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.385 [2024-12-07 11:41:48.524673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.385 [2024-12-07 11:41:48.524687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.524979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.524990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.386 [2024-12-07 11:41:48.525032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.386 [2024-12-07 11:41:48.525359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.386 [2024-12-07 11:41:48.525372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a7580 is same with the state(6) to be set 00:29:49.386 [2024-12-07 11:41:48.526847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:49.386 [2024-12-07 11:41:48.526871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:49.386 [2024-12-07 11:41:48.526887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:49.386 task offset: 26624 on job bdev=Nvme1n1 fails 00:29:49.386 1607.85 IOPS, 100.49 MiB/s [2024-12-07T10:41:48.740Z] [2024-12-07 11:41:48.545018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.386 [2024-12-07 11:41:48.545068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0280 with addr=10.0.0.2, port=4420 00:29:49.386 [2024-12-07 
11:41:48.545086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0280 is same with the state(6) to be set 00:29:49.386 [2024-12-07 11:41:48.545302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.386 [2024-12-07 11:41:48.545320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0c80 with addr=10.0.0.2, port=4420 00:29:49.386 [2024-12-07 11:41:48.545331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0c80 is same with the state(6) to be set 00:29:49.386 [2024-12-07 11:41:48.545506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.386 [2024-12-07 11:41:48.545523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3e80 with addr=10.0.0.2, port=4420 00:29:49.386 [2024-12-07 11:41:48.545535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set 00:29:49.386 [2024-12-07 11:41:48.545608] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:49.387 [2024-12-07 11:41:48.545629] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:49.387 [2024-12-07 11:41:48.545644] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
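The records above all follow one shape: a bracketed timestamp, the emitting source file and line, the function name, a `*LEVEL*` marker, and the message. A minimal editorial sketch (not part of SPDK's tooling) that tallies such records by level and emitting function, useful for summarizing bursts like this one:

```python
import re
from collections import Counter

# Matches the "[date time] file.c: line:function: *LEVEL*:" prefix seen
# in the console output above; the pattern is inferred from this log.
RECORD = re.compile(
    r"\[\d{4}-\d{2}-\d{2} [\d:.]+\] +(\S+): *(\d+):(\w+): \*(\w+)\*:"
)

def summarize(log_text: str) -> Counter:
    """Tally (LEVEL, function) pairs, e.g. ('NOTICE', 'spdk_nvme_print_completion')."""
    return Counter(
        (m.group(4), m.group(3)) for m in RECORD.finditer(log_text)
    )
```

Run against a capture of this section, the counts make the failure pattern obvious at a glance: dozens of `NOTICE` pairs from `nvme_io_qpair_print_command`/`spdk_nvme_print_completion` (aborted READs) against a handful of `ERROR` records from `posix_sock_create` and `nvme_tcp_qpair_connect_sock` (the underlying `errno = 111` connection refusals).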
00:29:49.387 [2024-12-07 11:41:48.545662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor 00:29:49.387 [2024-12-07 11:41:48.545681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:29:49.387 [2024-12-07 11:41:48.545705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0280 (9): Bad file descriptor 00:29:49.387 [2024-12-07 11:41:48.547032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:49.387 [2024-12-07 11:41:48.547064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:49.387 [2024-12-07 11:41:48.547099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:49.387 [2024-12-07 11:41:48.547372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.387 [2024-12-07 11:41:48.547472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
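The aborted READ commands in these bursts advance in a fixed stride: every command covers `len:128` blocks, and the record for `cid:n` sits at `lba = 16384 + 128*n` (cid 0 at lba 16384 through cid 63 at lba 24448 above), i.e. one sequential scan whose in-flight tail was aborted wholesale on SQ deletion. A one-line sketch of that observed relation:

```python
# Stride of the aborted sequential READs seen in this log:
# cid 0 -> lba 16384, cid 1 -> lba 16512, ..., each len:128 blocks.
def expected_lba(cid: int, base: int = 16384, stride: int = 128) -> int:
    return base + stride * cid
```

Checking a few records against it (cid 56 at lba 23552, cid 63 at lba 24448) confirms the queue held one contiguous read stream when the connection dropped.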
00:29:49.387 [2024-12-07 11:41:48.547890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.547979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.547997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.387 [2024-12-07 11:41:48.548395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.387 [2024-12-07 11:41:48.548408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 
11:41:48.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548726] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.548984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.548996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 
[2024-12-07 11:41:48.549006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.549024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.549034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.549049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.549060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.549073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.549085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.549098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.549109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.549121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6b80 is same with the state(6) to be set 00:29:49.388 [2024-12-07 11:41:48.550623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.388 [2024-12-07 11:41:48.550885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.388 [2024-12-07 11:41:48.550896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.550908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.550920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.389 [2024-12-07 11:41:48.550933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.550944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.550957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.550967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.550981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.550991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 
11:41:48.551494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551626] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.389 [2024-12-07 11:41:48.551851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.389 [2024-12-07 11:41:48.551864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.551876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.551888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 
[2024-12-07 11:41:48.551899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.551912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.551923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.551947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.551960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.551971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.551984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.551994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.552191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.552202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6e00 is same with the state(6) to be set 00:29:49.390 [2024-12-07 11:41:48.553692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.390 [2024-12-07 11:41:48.553804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.553976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.553987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.390 [2024-12-07 11:41:48.554225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.390 [2024-12-07 11:41:48.554315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.390 [2024-12-07 11:41:48.554328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 
11:41:48.554775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554912] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.554983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.554994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.391 [2024-12-07 11:41:48.555168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.391 [2024-12-07 11:41:48.555182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 
[2024-12-07 11:41:48.555192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.555206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.555216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.555229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.555241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.555254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.555265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.555277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a7080 is same with the state(6) to be set 00:29:49.392 [2024-12-07 11:41:48.556768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.556978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.556990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:49.392 [2024-12-07 11:41:48.557102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.392 [2024-12-07 11:41:48.557627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.392 [2024-12-07 11:41:48.557640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 
11:41:48.557650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.557983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.557996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 
[2024-12-07 11:41:48.558066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.393 [2024-12-07 11:41:48.558324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.393 [2024-12-07 11:41:48.558335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:49.393 [2024-12-07 11:41:48.558347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a7300 is same with the state(6) to be set
00:29:49.393 [2024-12-07 11:41:48.562276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:49.393 [2024-12-07 11:41:48.562308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:49.393 [2024-12-07 11:41:48.562324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:49.393 
00:29:49.393 Latency(us)
00:29:49.393 [2024-12-07T10:41:48.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:49.393 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme1n1 ended in about 0.99 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme1n1 : 0.99 194.03 12.13 64.68 0.00 244571.31 20206.93 270882.13
00:29:49.393 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme2n1 ended in about 0.99 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme2n1 : 0.99 193.80 12.11 64.60 0.00 239907.20 22391.47 290106.03
00:29:49.393 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme3n1 ended in about 0.99 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme3n1 : 0.99 193.55 12.10 64.52 0.00 235316.91 18350.08 256901.12
00:29:49.393 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme4n1 ended in about 1.00 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme4n1 : 1.00 195.84 12.24 63.95 0.00 229037.82 12724.91 272629.76
00:29:49.393 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme5n1 ended in about 1.00 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme5n1 : 1.00 127.51 7.97 63.75 0.00 304735.86 20316.16 288358.40
00:29:49.393 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme6n1 ended in about 1.03 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme6n1 : 1.03 124.19 7.76 62.10 0.00 307080.82 17913.17 272629.76
00:29:49.393 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme7n1 ended in about 1.03 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.393 Nvme7n1 : 1.03 123.82 7.74 61.91 0.00 301705.67 51991.89 279620.27
00:29:49.393 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.393 Job: Nvme8n1 ended in about 1.04 seconds with error
00:29:49.393 Verification LBA range: start 0x0 length 0x400
00:29:49.394 Nvme8n1 : 1.04 185.18 11.57 61.73 0.00 221986.56 16274.77 274377.39
00:29:49.394 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.394 Job: Nvme9n1 ended in about 1.04 seconds with error
00:29:49.394 Verification LBA range: start 0x0 length 0x400
00:29:49.394 Nvme9n1 : 1.04 123.09 7.69 61.55 0.00 290602.10 35389.44 283115.52
00:29:49.394 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:49.394 Job: Nvme10n1 ended in about 1.01 seconds with error
00:29:49.394 Verification LBA range: start 0x0 length 0x400
00:29:49.394 Nvme10n1 : 1.01 127.11 7.94 63.56 0.00 273010.35 18131.63 290106.03
00:29:49.394 [2024-12-07T10:41:48.748Z] ===================================================================================================================
00:29:49.394 [2024-12-07T10:41:48.748Z] Total : 1588.14 99.26 632.33 0.00 260363.59 
12724.91 290106.03 00:29:49.394 [2024-12-07 11:41:48.634459] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:49.394 [2024-12-07 11:41:48.634509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:49.394 [2024-12-07 11:41:48.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.634850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f600 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.634865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f600 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.635155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.635172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.635183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039ec00 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.635365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.635379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.635389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.635404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.635415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:49.394 
[2024-12-07 11:41:48.635428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.635442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:49.394 [2024-12-07 11:41:48.635455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.635465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.635474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.635484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:49.394 [2024-12-07 11:41:48.635495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.635504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.635513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.635522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:49.394 [2024-12-07 11:41:48.635590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.635612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.635628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f600 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.636168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.636191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1680 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.636203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1680 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.636551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.636566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2080 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.636576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2080 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.636908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.636923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2a80 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.636934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2a80 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.637270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:49.394 [2024-12-07 11:41:48.637286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3480 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.637296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3480 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.637324] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:49.394 [2024-12-07 11:41:48.637344] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:49.394 [2024-12-07 11:41:48.637359] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:49.394 [2024-12-07 11:41:48.637382] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:49.394 [2024-12-07 11:41:48.637397] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:49.394 [2024-12-07 11:41:48.637411] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:29:49.394 [2024-12-07 11:41:48.639166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:49.394 [2024-12-07 11:41:48.639197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:49.394 [2024-12-07 11:41:48.639209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:49.394 [2024-12-07 11:41:48.639276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1680 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.639295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2080 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.639309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2a80 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.639323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3480 (9): Bad file descriptor 00:29:49.394 [2024-12-07 11:41:48.639334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.639344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.639354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.639365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:49.394 [2024-12-07 11:41:48.639376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.639385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.639394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.639403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:49.394 [2024-12-07 11:41:48.639413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.639423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.639432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.639443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:49.394 [2024-12-07 11:41:48.640031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.640057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3e80 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.640068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.640349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.640368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0c80 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.640378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0c80 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.640712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.394 [2024-12-07 11:41:48.640727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0280 with addr=10.0.0.2, port=4420 00:29:49.394 [2024-12-07 11:41:48.640738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0280 is same with the state(6) to be set 00:29:49.394 [2024-12-07 11:41:48.640749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.640758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.640768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.640778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:49.394 [2024-12-07 11:41:48.640788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:49.394 [2024-12-07 11:41:48.640798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:49.394 [2024-12-07 11:41:48.640807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:49.394 [2024-12-07 11:41:48.640816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:49.395 [2024-12-07 11:41:48.640826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:49.395 [2024-12-07 11:41:48.640835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:49.395 [2024-12-07 11:41:48.640845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:49.395 [2024-12-07 11:41:48.640855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:49.395 [2024-12-07 11:41:48.640865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:49.395 [2024-12-07 11:41:48.640873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:49.395 [2024-12-07 11:41:48.640882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:49.395 [2024-12-07 11:41:48.640891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:49.395 [2024-12-07 11:41:48.640981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor 00:29:49.395 [2024-12-07 11:41:48.640999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:29:49.395 [2024-12-07 11:41:48.641017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0280 (9): Bad file descriptor 00:29:49.395 [2024-12-07 11:41:48.641056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:49.395 [2024-12-07 11:41:48.641067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:49.395 [2024-12-07 11:41:48.641078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:49.395 [2024-12-07 11:41:48.641088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:49.395 [2024-12-07 11:41:48.641098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:49.395 [2024-12-07 11:41:48.641110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:49.395 [2024-12-07 11:41:48.641119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:49.395 [2024-12-07 11:41:48.641129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:49.395 [2024-12-07 11:41:48.641139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:49.395 [2024-12-07 11:41:48.641148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:49.395 [2024-12-07 11:41:48.641157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:49.395 [2024-12-07 11:41:48.641165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:50.783 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2645211 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2645211 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2645211 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:51.728 11:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:51.728 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.728 rmmod nvme_tcp 00:29:51.728 rmmod nvme_fabrics 00:29:51.728 rmmod nvme_keyring 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2644875 ']' 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2644875 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2644875 ']' 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2644875 00:29:51.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2644875) - No such process 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2644875 is not found' 00:29:51.728 Process with pid 2644875 is not found 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:51.728 
11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.728 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.280 00:29:54.280 real 0m9.723s 00:29:54.280 user 0m26.414s 00:29:54.280 sys 0m1.669s 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 ************************************ 00:29:54.280 END TEST nvmf_shutdown_tc3 00:29:54.280 ************************************ 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:54.280 11:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 ************************************ 00:29:54.280 START TEST nvmf_shutdown_tc4 00:29:54.280 ************************************ 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.280 11:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.280 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:54.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.281 
11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:54.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:54.281 Found net devices under 0000:31:00.0: cvl_0_0 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:54.281 Found net devices under 0000:31:00.1: cvl_0_1 
00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.281 11:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:29:54.281 00:29:54.281 --- 10.0.0.2 ping statistics --- 00:29:54.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.281 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:29:54.281 00:29:54.281 --- 10.0.0.1 ping statistics --- 00:29:54.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.281 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2646964 00:29:54.281 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2646964 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2646964 ']' 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:54.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.282 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:54.543 [2024-12-07 11:41:53.696873] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:29:54.543 [2024-12-07 11:41:53.696987] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.543 [2024-12-07 11:41:53.853587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.805 [2024-12-07 11:41:53.938006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.805 [2024-12-07 11:41:53.938053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.805 [2024-12-07 11:41:53.938062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.805 [2024-12-07 11:41:53.938071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.805 [2024-12-07 11:41:53.938078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:54.805 [2024-12-07 11:41:53.939893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.805 [2024-12-07 11:41:53.940054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.805 [2024-12-07 11:41:53.940155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:54.805 [2024-12-07 11:41:53.940245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.377 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.377 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:55.378 [2024-12-07 11:41:54.509021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.378 11:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.378 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:55.378 Malloc1 00:29:55.378 [2024-12-07 11:41:54.656910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.378 Malloc2 00:29:55.639 Malloc3 00:29:55.639 Malloc4 00:29:55.639 Malloc5 00:29:55.639 Malloc6 00:29:55.903 Malloc7 00:29:55.904 Malloc8 00:29:55.904 Malloc9 
00:29:55.904 Malloc10 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2647339 00:29:56.165 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:56.166 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:56.166 [2024-12-07 11:41:55.407302] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2646964 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2646964 ']' 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2646964 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646964 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646964' 00:30:01.458 killing process with pid 2646964 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2646964 00:30:01.458 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2646964 00:30:01.458 [2024-12-07 11:42:00.381112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:30:01.458 [2024-12-07 
11:42:00.381167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.381997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.382005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.382019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.382039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:01.458 [2024-12-07 11:42:00.382046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be 
set 00:30:01.458 [2024-12-07 11:42:00.388351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.388384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.388992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 [2024-12-07 11:42:00.389033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389055] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same Write completed with error (sct=0, sc=8) 00:30:01.459 with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.389095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.389115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 
[2024-12-07 11:42:00.389128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.389134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same starting I/O failed: -6 00:30:01.459 with the state(6) to be set 00:30:01.459 [2024-12-07 11:42:00.389148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.389543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.459 starting I/O failed: -6 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with 
error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 
00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 [2024-12-07 11:42:00.391930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 00:30:01.459 
00:30:01.459 Write completed with error (sct=0, sc=8) 00:30:01.459 starting I/O failed: -6 
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeated for each outstanding write I/O, elided ...] 
00:30:01.460 [2024-12-07 11:42:00.393917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.460 NVMe io qpair process completion error 
[... repeated write-error lines elided ...] 
00:30:01.460 [2024-12-07 11:42:00.395445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 
[... repeated write-error lines elided ...] 
00:30:01.460 [2024-12-07 11:42:00.396985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 
[... repeated write-error lines elided ...] 
00:30:01.461 [2024-12-07 11:42:00.398870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 
[... repeated write-error lines elided ...] 
00:30:01.461 [2024-12-07 11:42:00.406154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.461 NVMe io qpair process completion error 
[... repeated write-error lines elided ...] 
00:30:01.461 [2024-12-07 11:42:00.407713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 
[... repeated write-error lines elided ...] 
00:30:01.462 [2024-12-07 11:42:00.409137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 
[... repeated write-error lines elided ...] 
00:30:01.462 [2024-12-07 11:42:00.411126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 
[... repeated write-error lines elided ...] 
00:30:01.463 [2024-12-07 11:42:00.420733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.463 NVMe io qpair process completion error 
[... repeated write-error lines elided ...] 
00:30:01.463 [2024-12-07 11:42:00.422333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 
[... repeated write-error lines elided ...] 
00:30:01.463 [2024-12-07 11:42:00.423729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 
[... repeated write-error lines continue ...] 00:30:01.464 Write completed with error (sct=0, sc=8) 
00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with 
error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 [2024-12-07 11:42:00.425645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, 
sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error 
(sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with 
error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 [2024-12-07 11:42:00.437728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.464 NVMe io qpair process completion error 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 Write completed with error (sct=0, sc=8) 00:30:01.464 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write 
completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 [2024-12-07 11:42:00.439262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.465 starting I/O failed: -6 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write 
completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 [2024-12-07 11:42:00.440841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.465 Write 
completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 
00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 
00:30:01.465 [2024-12-07 11:42:00.442784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.465 Write completed with error (sct=0, sc=8) 00:30:01.465 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 
starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 
00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 [2024-12-07 11:42:00.450185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.466 NVMe io qpair 
process completion error 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6 00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 Write completed with error (sct=0, sc=8) 
00:30:01.466 Write completed with error (sct=0, sc=8) 00:30:01.466 starting I/O failed: -6
[the two messages above repeat for every outstanding I/O on each affected qpair; repeats condensed, keeping only the distinct qpair-level error events]
00:30:01.466 [2024-12-07 11:42:00.451977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.466 [2024-12-07 11:42:00.453351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.467 [2024-12-07 11:42:00.455279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.467 [2024-12-07 11:42:00.462411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.467 NVMe io qpair process completion error
00:30:01.468 [2024-12-07 11:42:00.464142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.468 [2024-12-07 11:42:00.465714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.468 [2024-12-07 11:42:00.467639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.469 [2024-12-07 11:42:00.477026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.469 NVMe io qpair process completion error
00:30:01.469 [2024-12-07 11:42:00.478447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.470 [2024-12-07 11:42:00.479865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.470 [2024-12-07 11:42:00.481803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed:
-6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O 
failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting 
I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.470 starting I/O failed: -6 00:30:01.470 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 [2024-12-07 11:42:00.491365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.471 NVMe io qpair process completion error 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O 
failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 [2024-12-07 11:42:00.493018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, 
sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 
Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 [2024-12-07 11:42:00.494612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed 
with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 
Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.471 Write completed with error (sct=0, sc=8) 00:30:01.471 starting I/O failed: -6 00:30:01.472 [2024-12-07 11:42:00.496563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write 
completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 
Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 
00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 [2024-12-07 11:42:00.506269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.472 NVMe io qpair process completion error 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error 
(sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 [2024-12-07 11:42:00.507976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 
00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 Write completed with error (sct=0, sc=8) 00:30:01.472 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 
00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 [2024-12-07 11:42:00.509548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 
00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 Write completed with error (sct=0, sc=8) 00:30:01.473 starting I/O failed: -6 
00:30:01.473 Write completed with error (sct=0, sc=8)
00:30:01.473 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:30:01.473 [2024-12-07 11:42:00.511415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.473 starting I/O failed: -6
[... the entry above repeats 11 times in total; condensed ...]
00:30:01.473 NVMe io qpair process completion error
00:30:01.473 Write completed with error (sct=0, sc=8)
[... the entry above repeats many times, continuing through 00:30:01.474; condensed ...]
00:30:01.474 Initializing NVMe Controllers
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:01.474 Controller IO queue size 128, less than required.
00:30:01.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:01.474 Initialization complete. Launching workers.
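Editor's note: the repeated `Write completed with error (sct=0, sc=8)` entries above report the NVMe completion status as a (Status Code Type, Status Code) pair. A minimal decoding sketch, assuming the Generic Command Status values from the NVMe base specification (an illustrative partial table, not SPDK's own decoder):

```python
# Decode the (sct, sc) pair printed in the failed-write log entries above.
# Assumption: sct=0 selects the NVMe "Generic Command Status" table; the
# entries below are a partial transcription of that table for illustration.
GENERIC_SC = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Render an NVMe completion status pair as human-readable text."""
    if sct == 0:
        return GENERIC_SC.get(sc, f"Generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x}, SC 0x{sc:02x}"

# The failed writes above report (sct=0, sc=8):
print(decode_status(0, 8))  # Command Aborted due to SQ Deletion
```

A status of 08h (Command Aborted due to SQ Deletion) is consistent with the `CQ transport error -6` on cnode9 above: once the qpair is torn down, writes still queued on it complete as aborted.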
00:30:01.474 ========================================================
00:30:01.474 Latency(us)
00:30:01.474 Device Information : IOPS MiB/s Average min max
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1672.33 71.86 76563.14 1172.16 164767.75
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1678.63 72.13 76369.93 1351.40 196209.79
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1689.48 72.59 75988.01 932.97 153200.02
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1677.97 72.10 76617.20 1139.92 164273.87
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1673.63 71.91 76947.40 878.79 175796.30
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1676.45 72.04 77010.73 1392.57 217168.49
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1698.81 73.00 74875.92 965.85 219587.73
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1664.30 71.51 75412.26 1258.26 143425.22
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1639.34 70.44 76655.36 1418.88 140497.81
00:30:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1684.49 72.38 74728.95 1248.54 137291.41
00:30:01.474 ========================================================
00:30:01.474 Total : 16755.43 719.96 76113.62 878.79 219587.73
00:30:01.474
00:30:01.474 [2024-12-07 11:42:00.553371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028080 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025880 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028580 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027b80 is same with the state(6) to be set
00:30:01.474 [2024-12-07 11:42:00.553769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set
00:30:01.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:02.858 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2647339
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2647339
00:30:03.800 11:42:02
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2647339
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2646964 ']'
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2646964
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2646964 ']'
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2646964
00:30:03.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2646964) - No such process
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2646964 is not found'
00:30:03.800 Process with pid 2646964 is not found
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:03.800 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:05.765
00:30:05.765 real 0m11.818s
00:30:05.765 user 0m33.141s
00:30:05.765 sys 0m3.960s
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:05.765 ************************************
00:30:05.765 END TEST nvmf_shutdown_tc4
00:30:05.765 ************************************
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:30:05.765
00:30:05.765 real 0m52.315s
00:30:05.765 user 2m19.446s
00:30:05.765 sys 0m14.666s
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:05.765 ************************************
00:30:05.765 END TEST nvmf_shutdown
00:30:05.765 ************************************
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:05.765 11:42:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:30:06.025 ************************************
00:30:06.025 START TEST nvmf_nsid
00:30:06.025 ************************************
00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:30:06.025 * Looking for test storage...
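Editor's note: the spdk_nvme_perf latency summary printed earlier can be sanity-checked. The Total row's average latency should be the IOPS-weighted mean of the per-device averages. A small verification sketch using the values copied from the log (minor rounding drift against the printed figures is expected):

```python
# (IOPS, Average latency in us) per cnode, copied from the latency table
# in the log above (cnode8, cnode10, cnode1, cnode5, cnode6, cnode9,
# cnode2, cnode4, cnode7, cnode3).
devices = [
    (1672.33, 76563.14), (1678.63, 76369.93), (1689.48, 75988.01),
    (1677.97, 76617.20), (1673.63, 76947.40), (1676.45, 77010.73),
    (1698.81, 74875.92), (1664.30, 75412.26), (1639.34, 76655.36),
    (1684.49, 74728.95),
]

total_iops = sum(iops for iops, _ in devices)
# Average latency across all devices = IOPS-weighted mean of the averages.
weighted_avg = sum(iops * avg for iops, avg in devices) / total_iops

print(f"total IOPS {total_iops:.2f}, weighted avg latency {weighted_avg:.2f} us")
assert abs(total_iops - 16755.43) < 0.01      # matches the Total row exactly
assert abs(weighted_avg - 76113.62) < 2.0     # matches within rounding drift
```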
00:30:06.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.025 
11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.025 --rc genhtml_branch_coverage=1 00:30:06.025 --rc genhtml_function_coverage=1 00:30:06.025 --rc genhtml_legend=1 00:30:06.025 --rc geninfo_all_blocks=1 00:30:06.025 --rc 
geninfo_unexecuted_blocks=1 00:30:06.025 00:30:06.025 ' 00:30:06.025 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.025 --rc genhtml_branch_coverage=1 00:30:06.026 --rc genhtml_function_coverage=1 00:30:06.026 --rc genhtml_legend=1 00:30:06.026 --rc geninfo_all_blocks=1 00:30:06.026 --rc geninfo_unexecuted_blocks=1 00:30:06.026 00:30:06.026 ' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.026 --rc genhtml_branch_coverage=1 00:30:06.026 --rc genhtml_function_coverage=1 00:30:06.026 --rc genhtml_legend=1 00:30:06.026 --rc geninfo_all_blocks=1 00:30:06.026 --rc geninfo_unexecuted_blocks=1 00:30:06.026 00:30:06.026 ' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.026 --rc genhtml_branch_coverage=1 00:30:06.026 --rc genhtml_function_coverage=1 00:30:06.026 --rc genhtml_legend=1 00:30:06.026 --rc geninfo_all_blocks=1 00:30:06.026 --rc geninfo_unexecuted_blocks=1 00:30:06.026 00:30:06.026 ' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
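Editor's note: the `scripts/common.sh` xtrace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2` with `IFS=.-:`) checks whether the detected lcov version predates 2.x by splitting each version string on `.`, `-` and `:` and comparing the numeric components positionally. A rough Python analogue (an illustration of the comparison logic, not the shell implementation):

```python
import re

def lt(v1: str, v2: str) -> bool:
    """Return True if version v1 < v2, comparing numeric components.

    Mirrors the traced shell logic: split on '.', '-' and ':' (the
    IFS=.-: step), pad the shorter list with zeros, compare positionally.
    """
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

# The comparison the trace performs for the detected lcov version:
print(lt("1.15", "2"))  # True: lcov 1.15 is older than 2.x
```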
00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.026 11:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:06.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.026 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.286 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:14.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:14.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:14.456 Found net devices under 0000:31:00.0: cvl_0_0 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:14.456 Found net devices under 0000:31:00.1: cvl_0_1 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.456 11:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.456 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.457 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:14.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:30:14.457 00:30:14.457 --- 10.0.0.2 ping statistics --- 00:30:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.457 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:30:14.457 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:30:14.457 00:30:14.457 --- 10.0.0.1 ping statistics --- 00:30:14.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.457 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.457 11:42:13 
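The `prepare_net_devs`/`nvmf_tcp_init` phase traced above moves one port of the two-port NIC into a private network namespace so that target and initiator can reach each other over real hardware on a single host. A condensed sketch of that wiring, using the interface names and addresses from this run's log (requires root; a config fragment, not meant to be run verbatim outside the test rig):

```shell
# Target-side port (cvl_0_0) goes into its own namespace; the initiator-side
# port (cvl_0_1) stays in the root namespace. Names/IPs are from this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, then verify
# cross-namespace reachability in both directions, as the log does:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Every later `nvmf_tgt` launch in the log is then prefixed with `ip netns exec cvl_0_0_ns_spdk` so the target listens inside the namespace.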
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2653527 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2653527 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2653527 ']' 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.457 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.457 [2024-12-07 11:42:13.154410] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:14.457 [2024-12-07 11:42:13.154543] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.457 [2024-12-07 11:42:13.303501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.457 [2024-12-07 11:42:13.401004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.457 [2024-12-07 11:42:13.401055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.457 [2024-12-07 11:42:13.401067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.457 [2024-12-07 11:42:13.401078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.457 [2024-12-07 11:42:13.401089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.457 [2024-12-07 11:42:13.402289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2653674 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.806 
11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cbebcb81-bd62-4b0a-8acf-e0ad0bdd3b83 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9158ab69-cc48-45ca-b004-28f735e967b7 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b08163f0-25d2-4eaf-af7d-988de2007217 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.806 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:14.806 null0 00:30:14.806 null1 00:30:14.806 null2 00:30:14.806 [2024-12-07 11:42:14.013393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.807 [2024-12-07 11:42:14.037650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.807 [2024-12-07 11:42:14.040334] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 
initialization... 00:30:14.807 [2024-12-07 11:42:14.040454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653674 ] 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2653674 /var/tmp/tgt2.sock 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2653674 ']' 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.807 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:15.087 [2024-12-07 11:42:14.183146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.087 [2024-12-07 11:42:14.281743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.660 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.660 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:15.660 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:15.921 [2024-12-07 11:42:15.205095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.921 [2024-12-07 11:42:15.221269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:15.921 nvme0n1 nvme0n2 00:30:15.921 nvme1n1 00:30:16.181 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:16.181 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:16.181 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:17.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cbebcb81-bd62-4b0a-8acf-e0ad0bdd3b83 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:18.509 11:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cbebcb81bd624b0a8acfe0ad0bdd3b83 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CBEBCB81BD624B0A8ACFE0AD0BDD3B83 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CBEBCB81BD624B0A8ACFE0AD0BDD3B83 == \C\B\E\B\C\B\8\1\B\D\6\2\4\B\0\A\8\A\C\F\E\0\A\D\0\B\D\D\3\B\8\3 ]] 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9158ab69-cc48-45ca-b004-28f735e967b7 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:18.509 
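The `waitforblk` calls traced above poll `lsblk` until the freshly connected namespace's block device shows up. A minimal standalone sketch of that retry loop (hypothetical reimplementation; the real helper lives in `autotest_common.sh` and, per the trace, retries while `i -lt 15` with one-second sleeps):

```shell
# Poll until a block-device name appears in lsblk output; give up after
# $max one-second retries (default 15, matching the '-lt 15' in the trace).
waitforblk() {
  local name=$1 max=${2:-15} i=0
  while ! lsblk -l -o NAME 2>/dev/null | grep -q -w "$name"; do
    (( ++i >= max )) && return 1
    sleep 1
  done
}

# waitforblk nvme0n1   # as used above, after 'nvme connect' succeeds
```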
11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:18.509 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9158ab69cc4845cab00428f735e967b7 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9158AB69CC4845CAB00428F735E967B7 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9158AB69CC4845CAB00428F735E967B7 == \9\1\5\8\A\B\6\9\C\C\4\8\4\5\C\A\B\0\0\4\2\8\F\7\3\5\E\9\6\7\B\7 ]] 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b08163f0-25d2-4eaf-af7d-988de2007217 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b08163f025d24eafaf7d988de2007217 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B08163F025D24EAFAF7D988DE2007217 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B08163F025D24EAFAF7D988DE2007217 == \B\0\8\1\6\3\F\0\2\5\D\2\4\E\A\F\A\F\7\D\9\8\8\D\E\2\0\0\7\2\1\7 ]] 00:30:18.770 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2653674 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2653674 ']' 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2653674 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653674 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653674' 00:30:19.030 killing process with pid 2653674 00:30:19.030 11:42:18 
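The three NGUID checks above compare each namespace's NGUID, as reported by `nvme id-ns -o json | jq -r .nguid`, against the UUID the test assigned at namespace creation. A minimal sketch of the `uuid2nguid` conversion, assuming (as the traced `tr -d -` and the uppercase `echo` suggest) it is just hyphen removal plus uppercasing:

```shell
# Hypothetical standalone version of the uuid2nguid helper traced above:
# strip hyphens, then uppercase (assumption: no other transformation).
uuid2nguid() {
  echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid cbebcb81-bd62-4b0a-8acf-e0ad0bdd3b83
# → CBEBCB81BD624B0A8ACFE0AD0BDD3B83
```

This matches the log's comparison, e.g. `[[ CBEBCB81BD624B0A8ACFE0AD0BDD3B83 == \C\B\E\B... ]]`, where bash xtrace glob-escapes the right-hand pattern.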
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2653674 00:30:19.030 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2653674 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.417 rmmod nvme_tcp 00:30:20.417 rmmod nvme_fabrics 00:30:20.417 rmmod nvme_keyring 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2653527 ']' 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2653527 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2653527 ']' 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2653527 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.417 11:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653527 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653527' 00:30:20.417 killing process with pid 2653527 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2653527 00:30:20.417 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2653527 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.361 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.361 11:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.282 00:30:23.282 real 0m17.321s 00:30:23.282 user 0m15.020s 00:30:23.282 sys 0m7.223s 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:23.282 ************************************ 00:30:23.282 END TEST nvmf_nsid 00:30:23.282 ************************************ 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:23.282 00:30:23.282 real 19m17.565s 00:30:23.282 user 49m53.977s 00:30:23.282 sys 4m31.990s 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.282 11:42:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:23.282 ************************************ 00:30:23.282 END TEST nvmf_target_extra 00:30:23.282 ************************************ 00:30:23.282 11:42:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:23.282 11:42:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.282 11:42:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.282 11:42:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.282 ************************************ 00:30:23.282 START TEST nvmf_host 00:30:23.282 ************************************ 00:30:23.282 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:23.543 * Looking for test storage... 
00:30:23.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:23.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.543 --rc genhtml_branch_coverage=1 00:30:23.543 --rc genhtml_function_coverage=1 00:30:23.543 --rc genhtml_legend=1 00:30:23.543 --rc geninfo_all_blocks=1 00:30:23.543 --rc geninfo_unexecuted_blocks=1 00:30:23.543 00:30:23.543 ' 00:30:23.543 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:23.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.543 --rc genhtml_branch_coverage=1 00:30:23.543 --rc genhtml_function_coverage=1 00:30:23.543 --rc genhtml_legend=1 00:30:23.544 --rc 
geninfo_all_blocks=1 00:30:23.544 --rc geninfo_unexecuted_blocks=1 00:30:23.544 00:30:23.544 ' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.544 --rc genhtml_branch_coverage=1 00:30:23.544 --rc genhtml_function_coverage=1 00:30:23.544 --rc genhtml_legend=1 00:30:23.544 --rc geninfo_all_blocks=1 00:30:23.544 --rc geninfo_unexecuted_blocks=1 00:30:23.544 00:30:23.544 ' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.544 --rc genhtml_branch_coverage=1 00:30:23.544 --rc genhtml_function_coverage=1 00:30:23.544 --rc genhtml_legend=1 00:30:23.544 --rc geninfo_all_blocks=1 00:30:23.544 --rc geninfo_unexecuted_blocks=1 00:30:23.544 00:30:23.544 ' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.544 ************************************ 00:30:23.544 START TEST nvmf_multicontroller 00:30:23.544 ************************************ 00:30:23.544 11:42:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:23.805 * Looking for test storage... 
00:30:23.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.805 11:42:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:23.805 11:42:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:23.805 11:42:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:23.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.805 --rc genhtml_branch_coverage=1 00:30:23.805 --rc genhtml_function_coverage=1 
00:30:23.805 --rc genhtml_legend=1 00:30:23.805 --rc geninfo_all_blocks=1 00:30:23.805 --rc geninfo_unexecuted_blocks=1 00:30:23.805 00:30:23.805 ' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:23.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.805 --rc genhtml_branch_coverage=1 00:30:23.805 --rc genhtml_function_coverage=1 00:30:23.805 --rc genhtml_legend=1 00:30:23.805 --rc geninfo_all_blocks=1 00:30:23.805 --rc geninfo_unexecuted_blocks=1 00:30:23.805 00:30:23.805 ' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:23.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.805 --rc genhtml_branch_coverage=1 00:30:23.805 --rc genhtml_function_coverage=1 00:30:23.805 --rc genhtml_legend=1 00:30:23.805 --rc geninfo_all_blocks=1 00:30:23.805 --rc geninfo_unexecuted_blocks=1 00:30:23.805 00:30:23.805 ' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:23.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.805 --rc genhtml_branch_coverage=1 00:30:23.805 --rc genhtml_function_coverage=1 00:30:23.805 --rc genhtml_legend=1 00:30:23.805 --rc geninfo_all_blocks=1 00:30:23.805 --rc geninfo_unexecuted_blocks=1 00:30:23.805 00:30:23.805 ' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.805 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.806 11:42:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.806 11:42:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.952 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:31.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:31.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.953 11:42:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:31.953 Found net devices under 0000:31:00.0: cvl_0_0 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:31.953 Found net devices under 0000:31:00.1: cvl_0_1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:30:31.953 00:30:31.953 --- 10.0.0.2 ping statistics --- 00:30:31.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.953 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:30:31.953 00:30:31.953 --- 10.0.0.1 ping statistics --- 00:30:31.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.953 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2659199 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2659199 00:30:31.953 11:42:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2659199 ']' 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.953 11:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:31.953 [2024-12-07 11:42:30.607795] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:31.953 [2024-12-07 11:42:30.607907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.953 [2024-12-07 11:42:30.760149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:31.954 [2024-12-07 11:42:30.865328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.954 [2024-12-07 11:42:30.865384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:31.954 [2024-12-07 11:42:30.865398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.954 [2024-12-07 11:42:30.865409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.954 [2024-12-07 11:42:30.865418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.954 [2024-12-07 11:42:30.867740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.954 [2024-12-07 11:42:30.867865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.954 [2024-12-07 11:42:30.867889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.215 [2024-12-07 11:42:31.476890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.215 Malloc0 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.215 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 [2024-12-07 
11:42:31.590056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 [2024-12-07 11:42:31.602017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 Malloc1 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2659548 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2659548 /var/tmp/bdevperf.sock 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2659548 ']' 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.476 11:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.421 NVMe0n1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.421 1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:33.421 11:42:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.421 request: 00:30:33.421 { 00:30:33.421 "name": "NVMe0", 00:30:33.421 "trtype": "tcp", 00:30:33.421 "traddr": "10.0.0.2", 00:30:33.421 "adrfam": "ipv4", 00:30:33.421 "trsvcid": "4420", 00:30:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.421 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:33.421 "hostaddr": "10.0.0.1", 00:30:33.421 "prchk_reftag": false, 00:30:33.421 "prchk_guard": false, 00:30:33.421 "hdgst": false, 00:30:33.421 "ddgst": false, 00:30:33.421 "allow_unrecognized_csi": false, 00:30:33.421 "method": "bdev_nvme_attach_controller", 00:30:33.421 "req_id": 1 00:30:33.421 } 00:30:33.421 Got JSON-RPC error response 00:30:33.421 response: 00:30:33.421 { 00:30:33.421 "code": -114, 00:30:33.421 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:33.421 } 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:33.421 11:42:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.421 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.421 request: 00:30:33.421 { 00:30:33.421 "name": "NVMe0", 00:30:33.421 "trtype": "tcp", 00:30:33.421 "traddr": "10.0.0.2", 00:30:33.421 "adrfam": "ipv4", 00:30:33.421 "trsvcid": "4420", 00:30:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:33.421 "hostaddr": "10.0.0.1", 00:30:33.421 "prchk_reftag": false, 00:30:33.421 "prchk_guard": false, 00:30:33.421 "hdgst": false, 00:30:33.421 "ddgst": false, 00:30:33.421 "allow_unrecognized_csi": false, 00:30:33.421 "method": "bdev_nvme_attach_controller", 00:30:33.421 "req_id": 1 00:30:33.421 } 00:30:33.421 Got JSON-RPC error response 00:30:33.421 response: 00:30:33.422 { 00:30:33.422 "code": -114, 00:30:33.422 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:33.422 } 00:30:33.422 11:42:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.422 request: 00:30:33.422 { 00:30:33.422 "name": "NVMe0", 00:30:33.422 "trtype": "tcp", 00:30:33.422 "traddr": "10.0.0.2", 00:30:33.422 "adrfam": "ipv4", 00:30:33.422 "trsvcid": "4420", 00:30:33.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.422 "hostaddr": "10.0.0.1", 00:30:33.422 "prchk_reftag": false, 00:30:33.422 "prchk_guard": false, 00:30:33.422 "hdgst": false, 00:30:33.422 "ddgst": false, 00:30:33.422 "multipath": "disable", 00:30:33.422 "allow_unrecognized_csi": false, 00:30:33.422 "method": "bdev_nvme_attach_controller", 00:30:33.422 "req_id": 1 00:30:33.422 } 00:30:33.422 Got JSON-RPC error response 00:30:33.422 response: 00:30:33.422 { 00:30:33.422 "code": -114, 00:30:33.422 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:33.422 } 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.422 request: 00:30:33.422 { 00:30:33.422 "name": "NVMe0", 00:30:33.422 "trtype": "tcp", 00:30:33.422 "traddr": "10.0.0.2", 00:30:33.422 "adrfam": "ipv4", 00:30:33.422 "trsvcid": "4420", 00:30:33.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.422 "hostaddr": "10.0.0.1", 00:30:33.422 "prchk_reftag": false, 00:30:33.422 "prchk_guard": false, 00:30:33.422 "hdgst": false, 00:30:33.422 "ddgst": false, 00:30:33.422 "multipath": "failover", 00:30:33.422 "allow_unrecognized_csi": false, 00:30:33.422 "method": "bdev_nvme_attach_controller", 00:30:33.422 "req_id": 1 00:30:33.422 } 00:30:33.422 Got JSON-RPC error response 00:30:33.422 response: 00:30:33.422 { 00:30:33.422 "code": -114, 00:30:33.422 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:33.422 } 00:30:33.422 11:42:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.422 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.688 NVMe0n1 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.688 11:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.948 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:33.948 11:42:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:35.351 { 00:30:35.351 "results": [ 00:30:35.351 { 00:30:35.351 "job": "NVMe0n1", 00:30:35.351 "core_mask": "0x1", 00:30:35.351 "workload": "write", 00:30:35.351 "status": "finished", 00:30:35.351 "queue_depth": 128, 00:30:35.351 "io_size": 4096, 00:30:35.351 "runtime": 1.006532, 00:30:35.351 "iops": 21852.26103094586, 00:30:35.351 "mibps": 85.36039465213227, 00:30:35.351 "io_failed": 0, 00:30:35.351 "io_timeout": 0, 00:30:35.351 "avg_latency_us": 5843.317224217625, 00:30:35.351 "min_latency_us": 2143.5733333333333, 00:30:35.351 "max_latency_us": 12124.16 00:30:35.351 } 00:30:35.351 ], 00:30:35.351 "core_count": 1 00:30:35.351 } 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2659548 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2659548 ']' 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2659548 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659548 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659548' 00:30:35.351 killing process with pid 2659548 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2659548 00:30:35.351 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2659548 00:30:35.923 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:30:35.923 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.923 11:42:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:35.923 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:35.923 [2024-12-07 11:42:31.790463] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:35.923 [2024-12-07 11:42:31.790576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659548 ] 00:30:35.923 [2024-12-07 11:42:31.916192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.923 [2024-12-07 11:42:32.013780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.923 [2024-12-07 11:42:33.142443] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9b21c214-f09e-4e91-b571-4991eefc66e4 already exists 00:30:35.923 [2024-12-07 11:42:33.142487] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9b21c214-f09e-4e91-b571-4991eefc66e4 alias for bdev NVMe1n1 00:30:35.923 [2024-12-07 11:42:33.142503] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:35.923 Running I/O for 1 seconds... 00:30:35.923 21804.00 IOPS, 85.17 MiB/s 00:30:35.923 Latency(us) 00:30:35.923 [2024-12-07T10:42:35.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.923 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:35.923 NVMe0n1 : 1.01 21852.26 85.36 0.00 0.00 5843.32 2143.57 12124.16 00:30:35.923 [2024-12-07T10:42:35.277Z] =================================================================================================================== 00:30:35.923 [2024-12-07T10:42:35.277Z] Total : 21852.26 85.36 0.00 0.00 5843.32 2143.57 12124.16 00:30:35.923 Received shutdown signal, test time was about 1.000000 seconds 00:30:35.923 00:30:35.923 Latency(us) 00:30:35.923 [2024-12-07T10:42:35.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.923 [2024-12-07T10:42:35.277Z] =================================================================================================================== 00:30:35.923 [2024-12-07T10:42:35.277Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:35.923 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.923 rmmod nvme_tcp 00:30:35.923 rmmod nvme_fabrics 00:30:35.923 rmmod nvme_keyring 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2659199 ']' 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2659199 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2659199 ']' 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2659199 
00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659199 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659199' 00:30:35.923 killing process with pid 2659199 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2659199 00:30:35.923 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2659199 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.865 11:42:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.778 11:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.778 00:30:38.778 real 0m15.142s 00:30:38.778 user 0m20.548s 00:30:38.778 sys 0m6.602s 00:30:38.778 11:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.778 11:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.778 ************************************ 00:30:38.778 END TEST nvmf_multicontroller 00:30:38.778 ************************************ 00:30:38.778 11:42:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:38.778 11:42:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.778 11:42:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.779 11:42:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 ************************************ 00:30:38.779 START TEST nvmf_aer 00:30:38.779 ************************************ 00:30:38.779 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:39.040 * Looking for test storage... 
00:30:39.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.040 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.041 --rc genhtml_branch_coverage=1 00:30:39.041 --rc genhtml_function_coverage=1 00:30:39.041 --rc genhtml_legend=1 00:30:39.041 --rc geninfo_all_blocks=1 00:30:39.041 --rc geninfo_unexecuted_blocks=1 00:30:39.041 00:30:39.041 ' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.041 --rc 
genhtml_branch_coverage=1 00:30:39.041 --rc genhtml_function_coverage=1 00:30:39.041 --rc genhtml_legend=1 00:30:39.041 --rc geninfo_all_blocks=1 00:30:39.041 --rc geninfo_unexecuted_blocks=1 00:30:39.041 00:30:39.041 ' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.041 --rc genhtml_branch_coverage=1 00:30:39.041 --rc genhtml_function_coverage=1 00:30:39.041 --rc genhtml_legend=1 00:30:39.041 --rc geninfo_all_blocks=1 00:30:39.041 --rc geninfo_unexecuted_blocks=1 00:30:39.041 00:30:39.041 ' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:39.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.041 --rc genhtml_branch_coverage=1 00:30:39.041 --rc genhtml_function_coverage=1 00:30:39.041 --rc genhtml_legend=1 00:30:39.041 --rc geninfo_all_blocks=1 00:30:39.041 --rc geninfo_unexecuted_blocks=1 00:30:39.041 00:30:39.041 ' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.041 11:42:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:39.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.041 11:42:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:47.201 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:47.201 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.201 11:42:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:47.201 Found net devices under 0000:31:00.0: cvl_0_0 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:47.201 Found net devices under 0000:31:00.1: cvl_0_1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:47.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:47.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:30:47.201 00:30:47.201 --- 10.0.0.2 ping statistics --- 00:30:47.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.201 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:47.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:30:47.201 00:30:47.201 --- 10.0.0.1 ping statistics --- 00:30:47.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.201 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2664564 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2664564 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2664564 ']' 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.201 11:42:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.201 [2024-12-07 11:42:45.742431] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:47.201 [2024-12-07 11:42:45.742560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.201 [2024-12-07 11:42:45.895446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.201 [2024-12-07 11:42:45.997875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:47.201 [2024-12-07 11:42:45.997918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.201 [2024-12-07 11:42:45.997931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.201 [2024-12-07 11:42:45.997942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.201 [2024-12-07 11:42:45.997951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.201 [2024-12-07 11:42:46.000163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.201 [2024-12-07 11:42:46.000250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.201 [2024-12-07 11:42:46.000367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.201 [2024-12-07 11:42:46.000391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.201 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.201 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:47.201 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:47.201 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:47.201 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 [2024-12-07 11:42:46.566833] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 Malloc0 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 [2024-12-07 11:42:46.675448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.464 [ 00:30:47.464 { 00:30:47.464 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:47.464 "subtype": "Discovery", 00:30:47.464 "listen_addresses": [], 00:30:47.464 "allow_any_host": true, 00:30:47.464 "hosts": [] 00:30:47.464 }, 00:30:47.464 { 00:30:47.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.464 "subtype": "NVMe", 00:30:47.464 "listen_addresses": [ 00:30:47.464 { 00:30:47.464 "trtype": "TCP", 00:30:47.464 "adrfam": "IPv4", 00:30:47.464 "traddr": "10.0.0.2", 00:30:47.464 "trsvcid": "4420" 00:30:47.464 } 00:30:47.464 ], 00:30:47.464 "allow_any_host": true, 00:30:47.464 "hosts": [], 00:30:47.464 "serial_number": "SPDK00000000000001", 00:30:47.464 "model_number": "SPDK bdev Controller", 00:30:47.464 "max_namespaces": 2, 00:30:47.464 "min_cntlid": 1, 00:30:47.464 "max_cntlid": 65519, 00:30:47.464 "namespaces": [ 00:30:47.464 { 00:30:47.464 "nsid": 1, 00:30:47.464 "bdev_name": "Malloc0", 00:30:47.464 "name": "Malloc0", 00:30:47.464 "nguid": "DFA599A8EB104A96A6EAEEC24609310C", 00:30:47.464 "uuid": "dfa599a8-eb10-4a96-a6ea-eec24609310c" 00:30:47.464 } 00:30:47.464 ] 00:30:47.464 } 00:30:47.464 ] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2664658 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:47.464 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:47.725 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:47.725 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:47.725 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:47.725 11:42:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.725 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.986 Malloc1 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.986 [ 00:30:47.986 { 00:30:47.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:47.986 "subtype": "Discovery", 00:30:47.986 "listen_addresses": [], 00:30:47.986 "allow_any_host": true, 00:30:47.986 "hosts": [] 00:30:47.986 }, 00:30:47.986 { 00:30:47.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.986 "subtype": "NVMe", 00:30:47.986 "listen_addresses": [ 00:30:47.986 { 00:30:47.986 "trtype": "TCP", 00:30:47.986 "adrfam": "IPv4", 00:30:47.986 "traddr": "10.0.0.2", 00:30:47.986 "trsvcid": "4420" 00:30:47.986 } 00:30:47.986 ], 00:30:47.986 "allow_any_host": true, 00:30:47.986 "hosts": [], 00:30:47.986 "serial_number": "SPDK00000000000001", 00:30:47.986 "model_number": 
"SPDK bdev Controller", 00:30:47.986 "max_namespaces": 2, 00:30:47.986 "min_cntlid": 1, 00:30:47.986 "max_cntlid": 65519, 00:30:47.986 "namespaces": [ 00:30:47.986 { 00:30:47.986 "nsid": 1, 00:30:47.986 "bdev_name": "Malloc0", 00:30:47.986 "name": "Malloc0", 00:30:47.986 "nguid": "DFA599A8EB104A96A6EAEEC24609310C", 00:30:47.986 "uuid": "dfa599a8-eb10-4a96-a6ea-eec24609310c" 00:30:47.986 }, 00:30:47.986 { 00:30:47.986 "nsid": 2, 00:30:47.986 "bdev_name": "Malloc1", 00:30:47.986 "name": "Malloc1", 00:30:47.986 "nguid": "2D21CB97A2AD4EEF8BB32F0573FB0DFA", 00:30:47.986 "uuid": "2d21cb97-a2ad-4eef-8bb3-2f0573fb0dfa" 00:30:47.986 } 00:30:47.986 ] 00:30:47.986 } 00:30:47.986 ] 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2664658 00:30:47.986 Asynchronous Event Request test 00:30:47.986 Attaching to 10.0.0.2 00:30:47.986 Attached to 10.0.0.2 00:30:47.986 Registering asynchronous event callbacks... 00:30:47.986 Starting namespace attribute notice tests for all controllers... 00:30:47.986 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:47.986 aer_cb - Changed Namespace 00:30:47.986 Cleaning up... 
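The `nvmf_get_subsystems` RPC output interleaved in the trace above is plain JSON, so the namespace change that the AER tool reports (`aer_cb - Changed Namespace`) can also be confirmed programmatically. A minimal sketch, using an abridged copy of the JSON from the trace (field subset chosen for illustration; a real check would capture the RPC output directly):

```python
import json

# Abridged nvmf_get_subsystems output, as shown in the trace after Malloc1
# was added as nsid 2 to nqn.2016-06.io.spdk:cnode1.
subsystems = json.loads("""
[
  {
    "nqn": "nqn.2014-08.org.nvmexpress.discovery",
    "subtype": "Discovery",
    "namespaces": []
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "subtype": "NVMe",
    "max_namespaces": 2,
    "namespaces": [
      {"nsid": 1, "bdev_name": "Malloc0"},
      {"nsid": 2, "bdev_name": "Malloc1"}
    ]
  }
]
""")

# Locate the test subsystem and verify the second namespace is present --
# this is the state change that triggers the asynchronous event notice.
cnode1 = next(s for s in subsystems
              if s["nqn"] == "nqn.2016-06.io.spdk:cnode1")
nsids = sorted(ns["nsid"] for ns in cnode1["namespaces"])
assert nsids == [1, 2], f"unexpected namespace set: {nsids}"
```

In the test itself this check is implicit: the aer tool (`test/nvme/aer/aer -n 2`) waits for the namespace-attribute-changed AEN and touches `/tmp/aer_touch_file`, which `waitforfile` polls for.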
00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.986 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.248 rmmod nvme_tcp 
00:30:48.248 rmmod nvme_fabrics 00:30:48.248 rmmod nvme_keyring 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2664564 ']' 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2664564 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2664564 ']' 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2664564 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.248 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664564 00:30:48.508 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:48.508 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:48.508 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664564' 00:30:48.508 killing process with pid 2664564 00:30:48.508 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2664564 00:30:48.508 11:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2664564 00:30:49.079 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.080 11:42:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.080 11:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.626 00:30:51.626 real 0m12.417s 00:30:51.626 user 0m11.081s 00:30:51.626 sys 0m6.124s 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.626 ************************************ 00:30:51.626 END TEST nvmf_aer 00:30:51.626 ************************************ 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.626 ************************************ 00:30:51.626 START TEST nvmf_async_init 
00:30:51.626 ************************************ 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:51.626 * Looking for test storage... 00:30:51.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.626 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:51.627 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:51.627 --rc genhtml_branch_coverage=1 00:30:51.627 --rc genhtml_function_coverage=1 00:30:51.627 --rc genhtml_legend=1 00:30:51.627 --rc geninfo_all_blocks=1 00:30:51.627 --rc geninfo_unexecuted_blocks=1 00:30:51.627 00:30:51.627 ' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:51.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.627 --rc genhtml_branch_coverage=1 00:30:51.627 --rc genhtml_function_coverage=1 00:30:51.627 --rc genhtml_legend=1 00:30:51.627 --rc geninfo_all_blocks=1 00:30:51.627 --rc geninfo_unexecuted_blocks=1 00:30:51.627 00:30:51.627 ' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:51.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.627 --rc genhtml_branch_coverage=1 00:30:51.627 --rc genhtml_function_coverage=1 00:30:51.627 --rc genhtml_legend=1 00:30:51.627 --rc geninfo_all_blocks=1 00:30:51.627 --rc geninfo_unexecuted_blocks=1 00:30:51.627 00:30:51.627 ' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:51.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.627 --rc genhtml_branch_coverage=1 00:30:51.627 --rc genhtml_function_coverage=1 00:30:51.627 --rc genhtml_legend=1 00:30:51.627 --rc geninfo_all_blocks=1 00:30:51.627 --rc geninfo_unexecuted_blocks=1 00:30:51.627 00:30:51.627 ' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.627 11:42:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.627 
11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:51.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=69aeaf85052e4deaac6feacb018f265c 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.627 11:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.776 11:42:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:59.776 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.776 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:59.777 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:59.777 Found net devices under 0000:31:00.0: cvl_0_0 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:59.777 Found net devices under 0000:31:00.1: cvl_0_1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:59.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:30:59.777 00:30:59.777 --- 10.0.0.2 ping statistics --- 00:30:59.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.777 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:30:59.777 00:30:59.777 --- 10.0.0.1 ping statistics --- 00:30:59.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.777 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2669343 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2669343 00:30:59.777 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2669343 ']' 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.778 11:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:59.778 [2024-12-07 11:42:58.483706] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:59.778 [2024-12-07 11:42:58.483845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.778 [2024-12-07 11:42:58.632166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.778 [2024-12-07 11:42:58.729606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.778 [2024-12-07 11:42:58.729652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.778 [2024-12-07 11:42:58.729663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.778 [2024-12-07 11:42:58.729675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.778 [2024-12-07 11:42:58.729686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
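The bracketed startup line above records the DPDK EAL parameters the nvmf target was launched with inside the namespace. Purely as an illustration of how such a logged parameter line decomposes into flags and values (this parser is a simple heuristic written for this note, not SPDK or DPDK tooling — the lookahead rule for value-taking flags like `-c` is an assumption):

```python
# Illustrative helper, not SPDK/DPDK code: split a logged
# "[ DPDK EAL parameters: ... ]" line into program name plus flag/value pairs.
def parse_eal_params(log_line: str):
    # Keep only the tokens between "DPDK EAL parameters:" and the closing "]".
    inner = log_line.split("DPDK EAL parameters:", 1)[1].rsplit("]", 1)[0]
    tokens = inner.split()
    prog, params = tokens[0], {}          # first token is the app name ("nvmf")
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        if "=" in tok:                    # e.g. --log-level=lib.eal:6 (repeatable)
            key, _, val = tok.partition("=")
            params.setdefault(key, []).append(val)
            i += 1
        elif i + 1 < len(tokens) and not tokens[i + 1].startswith("-"):
            # Heuristic: a non-flag token right after a flag is its value (-c 0x1).
            params.setdefault(tok, []).append(tokens[i + 1])
            i += 2
        else:                             # bare flag, e.g. --no-telemetry
            params.setdefault(tok, []).append(True)
            i += 1
    return prog, params

# Sample input taken verbatim from the log line above.
line = ("[2024-12-07 11:42:58.483845] [ DPDK EAL parameters: nvmf -c 0x1 "
        "--no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 "
        "--log-level=lib.power:5 --log-level=user1:6 "
        "--base-virtaddr=0x200000000000 --match-allocations "
        "--file-prefix=spdk0 --proc-type=auto ]")
prog, params = parse_eal_params(line)
```

With the logged line, `prog` is `nvmf`, `-c` carries the core mask `0x1` (core 0 only, matching the `-m 0x1` passed to nvmfappstart), and the repeated `--log-level` flag collects all four subsystem levels.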
00:30:59.778 [2024-12-07 11:42:58.730900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 [2024-12-07 11:42:59.285938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 null0 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 69aeaf85052e4deaac6feacb018f265c 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.039 [2024-12-07 11:42:59.346267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.039 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.301 nvme0n1 00:31:00.301 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.301 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:00.301 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.301 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.301 [ 00:31:00.301 { 00:31:00.301 "name": "nvme0n1", 00:31:00.301 "aliases": [ 00:31:00.301 "69aeaf85-052e-4dea-ac6f-eacb018f265c" 00:31:00.301 ], 00:31:00.301 "product_name": "NVMe disk", 00:31:00.301 "block_size": 512, 00:31:00.301 "num_blocks": 2097152, 00:31:00.301 "uuid": "69aeaf85-052e-4dea-ac6f-eacb018f265c", 00:31:00.301 "numa_id": 0, 00:31:00.301 "assigned_rate_limits": { 00:31:00.301 "rw_ios_per_sec": 0, 00:31:00.301 "rw_mbytes_per_sec": 0, 00:31:00.301 "r_mbytes_per_sec": 0, 00:31:00.301 "w_mbytes_per_sec": 0 00:31:00.301 }, 00:31:00.301 "claimed": false, 00:31:00.301 "zoned": false, 00:31:00.301 "supported_io_types": { 00:31:00.301 "read": true, 00:31:00.301 "write": true, 00:31:00.301 "unmap": false, 00:31:00.301 "flush": true, 00:31:00.301 "reset": true, 00:31:00.301 "nvme_admin": true, 00:31:00.301 "nvme_io": true, 00:31:00.301 "nvme_io_md": false, 00:31:00.301 "write_zeroes": true, 00:31:00.301 "zcopy": false, 00:31:00.301 "get_zone_info": false, 00:31:00.301 "zone_management": false, 00:31:00.301 "zone_append": false, 00:31:00.301 "compare": true, 00:31:00.301 "compare_and_write": true, 00:31:00.301 "abort": true, 00:31:00.301 "seek_hole": false, 00:31:00.301 "seek_data": false, 00:31:00.301 "copy": true, 00:31:00.301 
"nvme_iov_md": false 00:31:00.301 }, 00:31:00.301 "memory_domains": [ 00:31:00.301 { 00:31:00.301 "dma_device_id": "system", 00:31:00.301 "dma_device_type": 1 00:31:00.301 } 00:31:00.301 ], 00:31:00.301 "driver_specific": { 00:31:00.301 "nvme": [ 00:31:00.301 { 00:31:00.301 "trid": { 00:31:00.301 "trtype": "TCP", 00:31:00.301 "adrfam": "IPv4", 00:31:00.301 "traddr": "10.0.0.2", 00:31:00.301 "trsvcid": "4420", 00:31:00.301 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:00.301 }, 00:31:00.301 "ctrlr_data": { 00:31:00.301 "cntlid": 1, 00:31:00.301 "vendor_id": "0x8086", 00:31:00.302 "model_number": "SPDK bdev Controller", 00:31:00.302 "serial_number": "00000000000000000000", 00:31:00.302 "firmware_revision": "25.01", 00:31:00.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.302 "oacs": { 00:31:00.302 "security": 0, 00:31:00.302 "format": 0, 00:31:00.302 "firmware": 0, 00:31:00.302 "ns_manage": 0 00:31:00.302 }, 00:31:00.302 "multi_ctrlr": true, 00:31:00.302 "ana_reporting": false 00:31:00.302 }, 00:31:00.302 "vs": { 00:31:00.302 "nvme_version": "1.3" 00:31:00.302 }, 00:31:00.302 "ns_data": { 00:31:00.302 "id": 1, 00:31:00.302 "can_share": true 00:31:00.302 } 00:31:00.302 } 00:31:00.302 ], 00:31:00.302 "mp_policy": "active_passive" 00:31:00.302 } 00:31:00.302 } 00:31:00.302 ] 00:31:00.302 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.302 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:00.302 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.302 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.302 [2024-12-07 11:42:59.621573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:00.302 [2024-12-07 11:42:59.621666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:31:00.564 [2024-12-07 11:42:59.754150] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.564 [ 00:31:00.564 { 00:31:00.564 "name": "nvme0n1", 00:31:00.564 "aliases": [ 00:31:00.564 "69aeaf85-052e-4dea-ac6f-eacb018f265c" 00:31:00.564 ], 00:31:00.564 "product_name": "NVMe disk", 00:31:00.564 "block_size": 512, 00:31:00.564 "num_blocks": 2097152, 00:31:00.564 "uuid": "69aeaf85-052e-4dea-ac6f-eacb018f265c", 00:31:00.564 "numa_id": 0, 00:31:00.564 "assigned_rate_limits": { 00:31:00.564 "rw_ios_per_sec": 0, 00:31:00.564 "rw_mbytes_per_sec": 0, 00:31:00.564 "r_mbytes_per_sec": 0, 00:31:00.564 "w_mbytes_per_sec": 0 00:31:00.564 }, 00:31:00.564 "claimed": false, 00:31:00.564 "zoned": false, 00:31:00.564 "supported_io_types": { 00:31:00.564 "read": true, 00:31:00.564 "write": true, 00:31:00.564 "unmap": false, 00:31:00.564 "flush": true, 00:31:00.564 "reset": true, 00:31:00.564 "nvme_admin": true, 00:31:00.564 "nvme_io": true, 00:31:00.564 "nvme_io_md": false, 00:31:00.564 "write_zeroes": true, 00:31:00.564 "zcopy": false, 00:31:00.564 "get_zone_info": false, 00:31:00.564 "zone_management": false, 00:31:00.564 "zone_append": false, 00:31:00.564 "compare": true, 00:31:00.564 "compare_and_write": true, 00:31:00.564 "abort": true, 00:31:00.564 "seek_hole": false, 00:31:00.564 "seek_data": false, 00:31:00.564 "copy": true, 00:31:00.564 "nvme_iov_md": false 00:31:00.564 }, 00:31:00.564 "memory_domains": [ 
00:31:00.564 { 00:31:00.564 "dma_device_id": "system", 00:31:00.564 "dma_device_type": 1 00:31:00.564 } 00:31:00.564 ], 00:31:00.564 "driver_specific": { 00:31:00.564 "nvme": [ 00:31:00.564 { 00:31:00.564 "trid": { 00:31:00.564 "trtype": "TCP", 00:31:00.564 "adrfam": "IPv4", 00:31:00.564 "traddr": "10.0.0.2", 00:31:00.564 "trsvcid": "4420", 00:31:00.564 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:00.564 }, 00:31:00.564 "ctrlr_data": { 00:31:00.564 "cntlid": 2, 00:31:00.564 "vendor_id": "0x8086", 00:31:00.564 "model_number": "SPDK bdev Controller", 00:31:00.564 "serial_number": "00000000000000000000", 00:31:00.564 "firmware_revision": "25.01", 00:31:00.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.564 "oacs": { 00:31:00.564 "security": 0, 00:31:00.564 "format": 0, 00:31:00.564 "firmware": 0, 00:31:00.564 "ns_manage": 0 00:31:00.564 }, 00:31:00.564 "multi_ctrlr": true, 00:31:00.564 "ana_reporting": false 00:31:00.564 }, 00:31:00.564 "vs": { 00:31:00.564 "nvme_version": "1.3" 00:31:00.564 }, 00:31:00.564 "ns_data": { 00:31:00.564 "id": 1, 00:31:00.564 "can_share": true 00:31:00.564 } 00:31:00.564 } 00:31:00.564 ], 00:31:00.564 "mp_policy": "active_passive" 00:31:00.564 } 00:31:00.564 } 00:31:00.564 ] 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.qCKoEHHDNu 
00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.qCKoEHHDNu 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.qCKoEHHDNu 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.564 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.565 [2024-12-07 11:42:59.846320] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:00.565 [2024-12-07 11:42:59.846487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.565 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.565 [2024-12-07 11:42:59.870405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:00.830 nvme0n1 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.830 [ 00:31:00.830 { 00:31:00.830 "name": "nvme0n1", 00:31:00.830 "aliases": [ 00:31:00.830 "69aeaf85-052e-4dea-ac6f-eacb018f265c" 00:31:00.830 ], 00:31:00.830 "product_name": "NVMe disk", 00:31:00.830 "block_size": 512, 00:31:00.830 "num_blocks": 2097152, 00:31:00.830 "uuid": "69aeaf85-052e-4dea-ac6f-eacb018f265c", 00:31:00.830 "numa_id": 0, 00:31:00.830 "assigned_rate_limits": { 00:31:00.830 "rw_ios_per_sec": 0, 00:31:00.830 
"rw_mbytes_per_sec": 0, 00:31:00.830 "r_mbytes_per_sec": 0, 00:31:00.830 "w_mbytes_per_sec": 0 00:31:00.830 }, 00:31:00.830 "claimed": false, 00:31:00.830 "zoned": false, 00:31:00.830 "supported_io_types": { 00:31:00.830 "read": true, 00:31:00.830 "write": true, 00:31:00.830 "unmap": false, 00:31:00.830 "flush": true, 00:31:00.830 "reset": true, 00:31:00.830 "nvme_admin": true, 00:31:00.830 "nvme_io": true, 00:31:00.830 "nvme_io_md": false, 00:31:00.830 "write_zeroes": true, 00:31:00.830 "zcopy": false, 00:31:00.830 "get_zone_info": false, 00:31:00.830 "zone_management": false, 00:31:00.830 "zone_append": false, 00:31:00.830 "compare": true, 00:31:00.830 "compare_and_write": true, 00:31:00.830 "abort": true, 00:31:00.830 "seek_hole": false, 00:31:00.830 "seek_data": false, 00:31:00.830 "copy": true, 00:31:00.830 "nvme_iov_md": false 00:31:00.830 }, 00:31:00.830 "memory_domains": [ 00:31:00.830 { 00:31:00.830 "dma_device_id": "system", 00:31:00.830 "dma_device_type": 1 00:31:00.830 } 00:31:00.830 ], 00:31:00.830 "driver_specific": { 00:31:00.830 "nvme": [ 00:31:00.830 { 00:31:00.830 "trid": { 00:31:00.830 "trtype": "TCP", 00:31:00.830 "adrfam": "IPv4", 00:31:00.830 "traddr": "10.0.0.2", 00:31:00.830 "trsvcid": "4421", 00:31:00.830 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:00.830 }, 00:31:00.830 "ctrlr_data": { 00:31:00.830 "cntlid": 3, 00:31:00.830 "vendor_id": "0x8086", 00:31:00.830 "model_number": "SPDK bdev Controller", 00:31:00.830 "serial_number": "00000000000000000000", 00:31:00.830 "firmware_revision": "25.01", 00:31:00.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.830 "oacs": { 00:31:00.830 "security": 0, 00:31:00.830 "format": 0, 00:31:00.830 "firmware": 0, 00:31:00.830 "ns_manage": 0 00:31:00.830 }, 00:31:00.830 "multi_ctrlr": true, 00:31:00.830 "ana_reporting": false 00:31:00.830 }, 00:31:00.830 "vs": { 00:31:00.830 "nvme_version": "1.3" 00:31:00.830 }, 00:31:00.830 "ns_data": { 00:31:00.830 "id": 1, 00:31:00.830 "can_share": true 00:31:00.830 } 
00:31:00.830 } 00:31:00.830 ], 00:31:00.830 "mp_policy": "active_passive" 00:31:00.830 } 00:31:00.830 } 00:31:00.830 ] 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.830 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.qCKoEHHDNu 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:00.831 11:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:00.831 rmmod nvme_tcp 00:31:00.831 rmmod nvme_fabrics 00:31:00.831 rmmod nvme_keyring 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:00.831 11:43:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2669343 ']' 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2669343 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2669343 ']' 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2669343 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669343 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669343' 00:31:00.831 killing process with pid 2669343 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2669343 00:31:00.831 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2669343 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.779 
11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.779 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.780 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.780 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.780 11:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.691 11:43:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.691 00:31:03.691 real 0m12.447s 00:31:03.691 user 0m4.745s 00:31:03.691 sys 0m6.226s 00:31:03.691 11:43:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.691 11:43:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.691 ************************************ 00:31:03.691 END TEST nvmf_async_init 00:31:03.692 ************************************ 00:31:03.692 11:43:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:03.692 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:03.692 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.692 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.953 ************************************ 00:31:03.953 START TEST dma 00:31:03.953 ************************************ 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:31:03.953 * Looking for test storage... 00:31:03.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:03.953 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.954 --rc genhtml_branch_coverage=1 00:31:03.954 --rc genhtml_function_coverage=1 00:31:03.954 --rc genhtml_legend=1 00:31:03.954 --rc geninfo_all_blocks=1 00:31:03.954 --rc geninfo_unexecuted_blocks=1 00:31:03.954 00:31:03.954 ' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.954 --rc genhtml_branch_coverage=1 00:31:03.954 --rc genhtml_function_coverage=1 
00:31:03.954 --rc genhtml_legend=1 00:31:03.954 --rc geninfo_all_blocks=1 00:31:03.954 --rc geninfo_unexecuted_blocks=1 00:31:03.954 00:31:03.954 ' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.954 --rc genhtml_branch_coverage=1 00:31:03.954 --rc genhtml_function_coverage=1 00:31:03.954 --rc genhtml_legend=1 00:31:03.954 --rc geninfo_all_blocks=1 00:31:03.954 --rc geninfo_unexecuted_blocks=1 00:31:03.954 00:31:03.954 ' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.954 --rc genhtml_branch_coverage=1 00:31:03.954 --rc genhtml_function_coverage=1 00:31:03.954 --rc genhtml_legend=1 00:31:03.954 --rc geninfo_all_blocks=1 00:31:03.954 --rc geninfo_unexecuted_blocks=1 00:31:03.954 00:31:03.954 ' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.954 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.216 11:43:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:04.216 
11:43:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:04.217 00:31:04.217 real 0m0.244s 00:31:04.217 user 0m0.146s 00:31:04.217 sys 0m0.112s 00:31:04.217 11:43:03 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.217 ************************************ 00:31:04.217 END TEST dma 00:31:04.217 ************************************ 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.217 ************************************ 00:31:04.217 START TEST nvmf_identify 00:31:04.217 ************************************ 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:04.217 * Looking for test storage... 
00:31:04.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:31:04.217 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:04.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.479 --rc genhtml_branch_coverage=1 00:31:04.479 --rc genhtml_function_coverage=1 00:31:04.479 --rc genhtml_legend=1 00:31:04.479 --rc geninfo_all_blocks=1 00:31:04.479 --rc geninfo_unexecuted_blocks=1 00:31:04.479 00:31:04.479 ' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:31:04.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.479 --rc genhtml_branch_coverage=1 00:31:04.479 --rc genhtml_function_coverage=1 00:31:04.479 --rc genhtml_legend=1 00:31:04.479 --rc geninfo_all_blocks=1 00:31:04.479 --rc geninfo_unexecuted_blocks=1 00:31:04.479 00:31:04.479 ' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:04.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.479 --rc genhtml_branch_coverage=1 00:31:04.479 --rc genhtml_function_coverage=1 00:31:04.479 --rc genhtml_legend=1 00:31:04.479 --rc geninfo_all_blocks=1 00:31:04.479 --rc geninfo_unexecuted_blocks=1 00:31:04.479 00:31:04.479 ' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:04.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.479 --rc genhtml_branch_coverage=1 00:31:04.479 --rc genhtml_function_coverage=1 00:31:04.479 --rc genhtml_legend=1 00:31:04.479 --rc geninfo_all_blocks=1 00:31:04.479 --rc geninfo_unexecuted_blocks=1 00:31:04.479 00:31:04.479 ' 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.479 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.480 11:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.621 11:43:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.621 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:12.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.622 
11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:12.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:12.622 Found net devices under 0000:31:00.0: cvl_0_0 00:31:12.622 11:43:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:12.622 Found net devices under 0000:31:00.1: cvl_0_1 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.622 11:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:31:12.622 00:31:12.622 --- 10.0.0.2 ping statistics --- 00:31:12.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.622 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:31:12.622 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:31:12.623 00:31:12.623 --- 10.0.0.1 ping statistics --- 00:31:12.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.623 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2674176 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2674176 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2674176 ']' 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.623 11:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.623 [2024-12-07 11:43:11.287636] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:12.623 [2024-12-07 11:43:11.287767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.623 [2024-12-07 11:43:11.453550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.623 [2024-12-07 11:43:11.555004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.623 [2024-12-07 11:43:11.555053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.623 [2024-12-07 11:43:11.555065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.623 [2024-12-07 11:43:11.555077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.623 [2024-12-07 11:43:11.555086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:12.623 [2024-12-07 11:43:11.557246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.623 [2024-12-07 11:43:11.557330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.623 [2024-12-07 11:43:11.557469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.623 [2024-12-07 11:43:11.557491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 [2024-12-07 11:43:12.059638] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 Malloc0 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.885 11:43:12 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 [2024-12-07 11:43:12.208599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 11:43:12 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:12.885 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:31:13.149 [
00:31:13.149 {
00:31:13.149 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:31:13.149 "subtype": "Discovery",
00:31:13.149 "listen_addresses": [
00:31:13.149 {
00:31:13.149 "trtype": "TCP",
00:31:13.149 "adrfam": "IPv4",
00:31:13.149 "traddr": "10.0.0.2",
00:31:13.149 "trsvcid": "4420"
00:31:13.149 }
00:31:13.149 ],
00:31:13.149 "allow_any_host": true,
00:31:13.149 "hosts": []
00:31:13.149 },
00:31:13.149 {
00:31:13.149 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:31:13.149 "subtype": "NVMe",
00:31:13.149 "listen_addresses": [
00:31:13.149 {
00:31:13.149 "trtype": "TCP",
00:31:13.149 "adrfam": "IPv4",
00:31:13.149 "traddr": "10.0.0.2",
00:31:13.149 "trsvcid": "4420"
00:31:13.149 }
00:31:13.149 ],
00:31:13.149 "allow_any_host": true,
00:31:13.149 "hosts": [],
00:31:13.149 "serial_number": "SPDK00000000000001",
00:31:13.149 "model_number": "SPDK bdev Controller",
00:31:13.149 "max_namespaces": 32,
00:31:13.149 "min_cntlid": 1,
00:31:13.149 "max_cntlid": 65519,
00:31:13.149 "namespaces": [
00:31:13.149 {
00:31:13.149 "nsid": 1,
00:31:13.149 "bdev_name": "Malloc0",
00:31:13.149 "name": "Malloc0",
00:31:13.149 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:31:13.149 "eui64": "ABCDEF0123456789",
00:31:13.149 "uuid": "cc590480-d8ac-4e4a-ac2e-07ea6e2a2d24"
00:31:13.149 }
00:31:13.149 ]
00:31:13.149 }
00:31:13.149 ]
00:31:13.149 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:13.149 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
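The `nvmf_get_subsystems` listing in the log is plain JSON, so a test or tooling script can consume it directly. A minimal sketch (the payload below is a trimmed copy of the fields shown in the log, not the full RPC round trip):

```python
import json

# Trimmed copy of the nvmf_get_subsystems output from the log above.
rpc_output = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": [],
   "serial_number": "SPDK00000000000001",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc0",
                   "nguid": "ABCDEF0123456789ABCDEF0123456789",
                   "eui64": "ABCDEF0123456789"}]}
]
""")

# Both the discovery subsystem and the test subsystem should be listed.
nqns = [s["nqn"] for s in rpc_output]
```

This mirrors what the test asserts implicitly: the discovery subsystem plus `cnode1` with `Malloc0` as namespace 1.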
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:13.149 [2024-12-07 11:43:12.291032] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:13.150 [2024-12-07 11:43:12.291106] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674227 ] 00:31:13.150 [2024-12-07 11:43:12.364311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:13.150 [2024-12-07 11:43:12.364409] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:13.150 [2024-12-07 11:43:12.364422] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:13.150 [2024-12-07 11:43:12.364441] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:13.150 [2024-12-07 11:43:12.364462] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:13.150 [2024-12-07 11:43:12.365278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:13.150 [2024-12-07 11:43:12.365328] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:13.150 [2024-12-07 11:43:12.375034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:13.150 [2024-12-07 11:43:12.375057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:13.150 [2024-12-07 11:43:12.375069] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:13.150 [2024-12-07 11:43:12.375075] 
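`spdk_nvme_identify` above receives its target as a single `-r` transport-ID string of space-separated `key:value` tokens. A small sketch of how such a string decomposes (`parse_trid` is a hypothetical helper for illustration, not an SPDK API):

```python
def parse_trid(trid: str) -> dict:
    """Split an SPDK-style transport ID string into key/value pairs."""
    # maxsplit=1 keeps values containing ':' (e.g. "subnqn:nqn...:cnode1") intact
    return dict(token.split(":", 1) for token in trid.split())

# The transport ID passed to spdk_nvme_identify in the log above.
trid = ("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2014-08.org.nvmexpress.discovery")
parsed = parse_trid(trid)
```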
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:13.150 [2024-12-07 11:43:12.375131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.375144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.375154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.150 [2024-12-07 11:43:12.375176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:13.150 [2024-12-07 11:43:12.375203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.150 [2024-12-07 11:43:12.383031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.150 [2024-12-07 11:43:12.383053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.150 [2024-12-07 11:43:12.383061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.150 [2024-12-07 11:43:12.383087] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:13.150 [2024-12-07 11:43:12.383109] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:13.150 [2024-12-07 11:43:12.383119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:13.150 [2024-12-07 11:43:12.383136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000025600) 00:31:13.150 [2024-12-07 11:43:12.383166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.150 [2024-12-07 11:43:12.383187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.150 [2024-12-07 11:43:12.383418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.150 [2024-12-07 11:43:12.383429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.150 [2024-12-07 11:43:12.383435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.150 [2024-12-07 11:43:12.383454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:13.150 [2024-12-07 11:43:12.383466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:13.150 [2024-12-07 11:43:12.383477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.150 [2024-12-07 11:43:12.383507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.150 [2024-12-07 11:43:12.383523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.150 [2024-12-07 11:43:12.383697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.150 [2024-12-07 11:43:12.383706] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.150 [2024-12-07 11:43:12.383714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.150 [2024-12-07 11:43:12.383729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:13.150 [2024-12-07 11:43:12.383749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:13.150 [2024-12-07 11:43:12.383759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.150 [2024-12-07 11:43:12.383785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.150 [2024-12-07 11:43:12.383800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.150 [2024-12-07 11:43:12.383976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.150 [2024-12-07 11:43:12.383985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.150 [2024-12-07 11:43:12.383991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.383997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.150 [2024-12-07 11:43:12.384006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:31:13.150 [2024-12-07 11:43:12.384026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.384033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.384039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.150 [2024-12-07 11:43:12.384053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.150 [2024-12-07 11:43:12.384069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.150 [2024-12-07 11:43:12.384282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.150 [2024-12-07 11:43:12.384292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.150 [2024-12-07 11:43:12.384298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.384306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.150 [2024-12-07 11:43:12.384314] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:13.150 [2024-12-07 11:43:12.384323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:13.150 [2024-12-07 11:43:12.384334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:13.150 [2024-12-07 11:43:12.384443] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:13.150 [2024-12-07 11:43:12.384451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:13.150 [2024-12-07 11:43:12.384469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.150 [2024-12-07 11:43:12.384476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.384482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.384496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.151 [2024-12-07 11:43:12.384513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.151 [2024-12-07 11:43:12.384704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.151 [2024-12-07 11:43:12.384714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.151 [2024-12-07 11:43:12.384722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.384728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.151 [2024-12-07 11:43:12.384738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:13.151 [2024-12-07 11:43:12.384751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.384761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.384768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.384781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.151 [2024-12-07 
11:43:12.384796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.151 [2024-12-07 11:43:12.384981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.151 [2024-12-07 11:43:12.384994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.151 [2024-12-07 11:43:12.385000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.385006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.151 [2024-12-07 11:43:12.385019] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:13.151 [2024-12-07 11:43:12.385027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.385042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:13.151 [2024-12-07 11:43:12.385058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.385075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.385082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.385094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.151 [2024-12-07 11:43:12.385110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.151 [2024-12-07 11:43:12.385332] 
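The `_nvme_ctrlr_set_state` debug lines above trace the host-side controller bring-up: connect the admin queue, read VS and CAP, check CC.EN, disable until CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then identify the controller. A sketch of that ordering as observed in this trace (paraphrased; the real state machine lives in SPDK's `nvme_ctrlr.c` and has more states):

```python
# Controller-init states in the order they appear in the debug trace above.
INIT_SEQUENCE = [
    "connect adminq",
    "read vs",
    "read cap",
    "check en",
    "disable and wait for CSTS.RDY = 0",
    "controller is disabled",
    "enable controller by writing CC.EN = 1",
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",
]

def comes_before(a: str, b: str) -> bool:
    """True if state a is reached before state b in the init sequence."""
    return INIT_SEQUENCE.index(a) < INIT_SEQUENCE.index(b)
```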
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.151 [2024-12-07 11:43:12.385343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.151 [2024-12-07 11:43:12.385348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.385356] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:13.151 [2024-12-07 11:43:12.385364] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.151 [2024-12-07 11:43:12.385376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.385398] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.385408] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.151 [2024-12-07 11:43:12.426227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.151 [2024-12-07 11:43:12.426233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.151 [2024-12-07 11:43:12.426257] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:13.151 [2024-12-07 11:43:12.426273] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:13.151 [2024-12-07 11:43:12.426284] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:13.151 [2024-12-07 11:43:12.426293] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:13.151 [2024-12-07 11:43:12.426301] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:13.151 [2024-12-07 11:43:12.426309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.426323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.426336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.151 [2024-12-07 11:43:12.426390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.151 [2024-12-07 11:43:12.426516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.151 [2024-12-07 11:43:12.426525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.151 [2024-12-07 11:43:12.426530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.151 [2024-12-07 11:43:12.426548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 
11:43:12.426561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.151 [2024-12-07 11:43:12.426582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.151 [2024-12-07 11:43:12.426612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.151 [2024-12-07 11:43:12.426641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.151 [2024-12-07 11:43:12.426669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.426686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:13.151 [2024-12-07 11:43:12.426698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.151 [2024-12-07 11:43:12.426704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.151 [2024-12-07 11:43:12.426716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.151 [2024-12-07 11:43:12.426733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.151 [2024-12-07 11:43:12.426741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:13.151 [2024-12-07 11:43:12.426748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:13.152 [2024-12-07 11:43:12.426755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.152 [2024-12-07 11:43:12.426762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.152 [2024-12-07 11:43:12.427017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.152 [2024-12-07 11:43:12.427027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.152 [2024-12-07 11:43:12.427032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.152 [2024-12-07 11:43:12.427047] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:13.152 [2024-12-07 11:43:12.427058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:13.152 [2024-12-07 11:43:12.427078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.152 [2024-12-07 11:43:12.427098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.152 [2024-12-07 11:43:12.427113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.152 [2024-12-07 11:43:12.427315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.152 [2024-12-07 11:43:12.427327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.152 [2024-12-07 11:43:12.427334] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427341] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:13.152 [2024-12-07 11:43:12.427349] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.152 [2024-12-07 11:43:12.427356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427382] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:31:13.152 [2024-12-07 11:43:12.427521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.152 [2024-12-07 11:43:12.427527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.152 [2024-12-07 11:43:12.427558] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:13.152 [2024-12-07 11:43:12.427597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.152 [2024-12-07 11:43:12.427621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.152 [2024-12-07 11:43:12.427631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.152 [2024-12-07 11:43:12.427654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.152 [2024-12-07 11:43:12.427671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.152 [2024-12-07 11:43:12.427679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.152 [2024-12-07 11:43:12.427953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.152 [2024-12-07 11:43:12.427966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:31:13.152 [2024-12-07 11:43:12.427976] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.427983] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=1024, cccid=4 00:31:13.152 [2024-12-07 11:43:12.427991] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=1024 00:31:13.152 [2024-12-07 11:43:12.427998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.428007] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.428019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.428028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.152 [2024-12-07 11:43:12.428036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.152 [2024-12-07 11:43:12.428042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.428048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.152 [2024-12-07 11:43:12.471025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.152 [2024-12-07 11:43:12.471045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.152 [2024-12-07 11:43:12.471051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.471064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.152 [2024-12-07 11:43:12.471091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.471099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.152 [2024-12-07 
11:43:12.471114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.152 [2024-12-07 11:43:12.471140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.152 [2024-12-07 11:43:12.471402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.152 [2024-12-07 11:43:12.471411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.152 [2024-12-07 11:43:12.471416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.471422] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=3072, cccid=4 00:31:13.152 [2024-12-07 11:43:12.471429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=3072 00:31:13.152 [2024-12-07 11:43:12.471436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.471453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.152 [2024-12-07 11:43:12.471462] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.421 [2024-12-07 11:43:12.512209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.421 [2024-12-07 11:43:12.512229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.421 [2024-12-07 11:43:12.512235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.421 [2024-12-07 11:43:12.512242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.421 [2024-12-07 11:43:12.512261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.421 [2024-12-07 11:43:12.512268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000025600) 00:31:13.421 [2024-12-07 11:43:12.512282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.421 [2024-12-07 11:43:12.512304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.421 [2024-12-07 11:43:12.512498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.421 [2024-12-07 11:43:12.512508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.421 [2024-12-07 11:43:12.512513] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.421 [2024-12-07 11:43:12.512519] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8, cccid=4 00:31:13.421 [2024-12-07 11:43:12.512527] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=8 00:31:13.421 [2024-12-07 11:43:12.512533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.421 [2024-12-07 11:43:12.512545] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.422 [2024-12-07 11:43:12.512551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.422 [2024-12-07 11:43:12.554203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.422 [2024-12-07 11:43:12.554222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.422 [2024-12-07 11:43:12.554228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.422 [2024-12-07 11:43:12.554234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.422 ===================================================== 00:31:13.422 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:13.422 
===================================================== 00:31:13.422 Controller Capabilities/Features 00:31:13.422 ================================ 00:31:13.422 Vendor ID: 0000 00:31:13.422 Subsystem Vendor ID: 0000 00:31:13.422 Serial Number: .................... 00:31:13.422 Model Number: ........................................ 00:31:13.422 Firmware Version: 25.01 00:31:13.422 Recommended Arb Burst: 0 00:31:13.422 IEEE OUI Identifier: 00 00 00 00:31:13.422 Multi-path I/O 00:31:13.422 May have multiple subsystem ports: No 00:31:13.422 May have multiple controllers: No 00:31:13.422 Associated with SR-IOV VF: No 00:31:13.422 Max Data Transfer Size: 131072 00:31:13.422 Max Number of Namespaces: 0 00:31:13.422 Max Number of I/O Queues: 1024 00:31:13.422 NVMe Specification Version (VS): 1.3 00:31:13.422 NVMe Specification Version (Identify): 1.3 00:31:13.422 Maximum Queue Entries: 128 00:31:13.422 Contiguous Queues Required: Yes 00:31:13.422 Arbitration Mechanisms Supported 00:31:13.422 Weighted Round Robin: Not Supported 00:31:13.422 Vendor Specific: Not Supported 00:31:13.422 Reset Timeout: 15000 ms 00:31:13.422 Doorbell Stride: 4 bytes 00:31:13.422 NVM Subsystem Reset: Not Supported 00:31:13.422 Command Sets Supported 00:31:13.422 NVM Command Set: Supported 00:31:13.422 Boot Partition: Not Supported 00:31:13.422 Memory Page Size Minimum: 4096 bytes 00:31:13.422 Memory Page Size Maximum: 4096 bytes 00:31:13.422 Persistent Memory Region: Not Supported 00:31:13.422 Optional Asynchronous Events Supported 00:31:13.422 Namespace Attribute Notices: Not Supported 00:31:13.422 Firmware Activation Notices: Not Supported 00:31:13.422 ANA Change Notices: Not Supported 00:31:13.422 PLE Aggregate Log Change Notices: Not Supported 00:31:13.422 LBA Status Info Alert Notices: Not Supported 00:31:13.422 EGE Aggregate Log Change Notices: Not Supported 00:31:13.422 Normal NVM Subsystem Shutdown event: Not Supported 00:31:13.422 Zone Descriptor Change Notices: Not Supported 00:31:13.422 
Discovery Log Change Notices: Supported 00:31:13.422 Controller Attributes 00:31:13.422 128-bit Host Identifier: Not Supported 00:31:13.422 Non-Operational Permissive Mode: Not Supported 00:31:13.422 NVM Sets: Not Supported 00:31:13.422 Read Recovery Levels: Not Supported 00:31:13.422 Endurance Groups: Not Supported 00:31:13.422 Predictable Latency Mode: Not Supported 00:31:13.422 Traffic Based Keep ALive: Not Supported 00:31:13.422 Namespace Granularity: Not Supported 00:31:13.422 SQ Associations: Not Supported 00:31:13.422 UUID List: Not Supported 00:31:13.422 Multi-Domain Subsystem: Not Supported 00:31:13.422 Fixed Capacity Management: Not Supported 00:31:13.422 Variable Capacity Management: Not Supported 00:31:13.422 Delete Endurance Group: Not Supported 00:31:13.422 Delete NVM Set: Not Supported 00:31:13.422 Extended LBA Formats Supported: Not Supported 00:31:13.422 Flexible Data Placement Supported: Not Supported 00:31:13.422 00:31:13.422 Controller Memory Buffer Support 00:31:13.422 ================================ 00:31:13.422 Supported: No 00:31:13.422 00:31:13.422 Persistent Memory Region Support 00:31:13.422 ================================ 00:31:13.422 Supported: No 00:31:13.422 00:31:13.422 Admin Command Set Attributes 00:31:13.422 ============================ 00:31:13.422 Security Send/Receive: Not Supported 00:31:13.422 Format NVM: Not Supported 00:31:13.422 Firmware Activate/Download: Not Supported 00:31:13.422 Namespace Management: Not Supported 00:31:13.422 Device Self-Test: Not Supported 00:31:13.422 Directives: Not Supported 00:31:13.422 NVMe-MI: Not Supported 00:31:13.422 Virtualization Management: Not Supported 00:31:13.422 Doorbell Buffer Config: Not Supported 00:31:13.422 Get LBA Status Capability: Not Supported 00:31:13.422 Command & Feature Lockdown Capability: Not Supported 00:31:13.422 Abort Command Limit: 1 00:31:13.422 Async Event Request Limit: 4 00:31:13.422 Number of Firmware Slots: N/A 00:31:13.422 Firmware Slot 1 Read-Only: N/A 
00:31:13.422 Firmware Activation Without Reset: N/A 00:31:13.422 Multiple Update Detection Support: N/A 00:31:13.422 Firmware Update Granularity: No Information Provided 00:31:13.422 Per-Namespace SMART Log: No 00:31:13.422 Asymmetric Namespace Access Log Page: Not Supported 00:31:13.422 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:13.422 Command Effects Log Page: Not Supported 00:31:13.422 Get Log Page Extended Data: Supported 00:31:13.422 Telemetry Log Pages: Not Supported 00:31:13.422 Persistent Event Log Pages: Not Supported 00:31:13.422 Supported Log Pages Log Page: May Support 00:31:13.422 Commands Supported & Effects Log Page: Not Supported 00:31:13.422 Feature Identifiers & Effects Log Page:May Support 00:31:13.422 NVMe-MI Commands & Effects Log Page: May Support 00:31:13.422 Data Area 4 for Telemetry Log: Not Supported 00:31:13.422 Error Log Page Entries Supported: 128 00:31:13.422 Keep Alive: Not Supported 00:31:13.422 00:31:13.422 NVM Command Set Attributes 00:31:13.422 ========================== 00:31:13.422 Submission Queue Entry Size 00:31:13.422 Max: 1 00:31:13.422 Min: 1 00:31:13.422 Completion Queue Entry Size 00:31:13.422 Max: 1 00:31:13.422 Min: 1 00:31:13.422 Number of Namespaces: 0 00:31:13.422 Compare Command: Not Supported 00:31:13.422 Write Uncorrectable Command: Not Supported 00:31:13.422 Dataset Management Command: Not Supported 00:31:13.422 Write Zeroes Command: Not Supported 00:31:13.422 Set Features Save Field: Not Supported 00:31:13.422 Reservations: Not Supported 00:31:13.422 Timestamp: Not Supported 00:31:13.422 Copy: Not Supported 00:31:13.422 Volatile Write Cache: Not Present 00:31:13.423 Atomic Write Unit (Normal): 1 00:31:13.423 Atomic Write Unit (PFail): 1 00:31:13.423 Atomic Compare & Write Unit: 1 00:31:13.423 Fused Compare & Write: Supported 00:31:13.423 Scatter-Gather List 00:31:13.423 SGL Command Set: Supported 00:31:13.423 SGL Keyed: Supported 00:31:13.423 SGL Bit Bucket Descriptor: Not Supported 00:31:13.423 
SGL Metadata Pointer: Not Supported 00:31:13.423 Oversized SGL: Not Supported 00:31:13.423 SGL Metadata Address: Not Supported 00:31:13.423 SGL Offset: Supported 00:31:13.423 Transport SGL Data Block: Not Supported 00:31:13.423 Replay Protected Memory Block: Not Supported 00:31:13.423 00:31:13.423 Firmware Slot Information 00:31:13.423 ========================= 00:31:13.423 Active slot: 0 00:31:13.423 00:31:13.423 00:31:13.423 Error Log 00:31:13.423 ========= 00:31:13.423 00:31:13.423 Active Namespaces 00:31:13.423 ================= 00:31:13.423 Discovery Log Page 00:31:13.423 ================== 00:31:13.423 Generation Counter: 2 00:31:13.423 Number of Records: 2 00:31:13.423 Record Format: 0 00:31:13.423 00:31:13.423 Discovery Log Entry 0 00:31:13.423 ---------------------- 00:31:13.423 Transport Type: 3 (TCP) 00:31:13.423 Address Family: 1 (IPv4) 00:31:13.423 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:13.423 Entry Flags: 00:31:13.423 Duplicate Returned Information: 1 00:31:13.423 Explicit Persistent Connection Support for Discovery: 1 00:31:13.423 Transport Requirements: 00:31:13.423 Secure Channel: Not Required 00:31:13.423 Port ID: 0 (0x0000) 00:31:13.423 Controller ID: 65535 (0xffff) 00:31:13.423 Admin Max SQ Size: 128 00:31:13.423 Transport Service Identifier: 4420 00:31:13.423 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:13.423 Transport Address: 10.0.0.2 00:31:13.423 Discovery Log Entry 1 00:31:13.423 ---------------------- 00:31:13.423 Transport Type: 3 (TCP) 00:31:13.423 Address Family: 1 (IPv4) 00:31:13.423 Subsystem Type: 2 (NVM Subsystem) 00:31:13.423 Entry Flags: 00:31:13.423 Duplicate Returned Information: 0 00:31:13.423 Explicit Persistent Connection Support for Discovery: 0 00:31:13.423 Transport Requirements: 00:31:13.423 Secure Channel: Not Required 00:31:13.423 Port ID: 0 (0x0000) 00:31:13.423 Controller ID: 65535 (0xffff) 00:31:13.423 Admin Max SQ Size: 128 00:31:13.423 Transport Service Identifier: 4420 
00:31:13.423 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:13.423 Transport Address: 10.0.0.2 [2024-12-07 11:43:12.554380] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:13.423 [2024-12-07 11:43:12.554396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.554409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.423 [2024-12-07 11:43:12.554418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.554426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.423 [2024-12-07 11:43:12.554434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.554442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.423 [2024-12-07 11:43:12.554449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.554457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.423 [2024-12-07 11:43:12.554469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.554477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.554486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.423 [2024-12-07 11:43:12.554499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.423 [2024-12-07 11:43:12.554520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.423 [2024-12-07 11:43:12.554746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.423 [2024-12-07 11:43:12.554756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.423 [2024-12-07 11:43:12.554762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.554769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.554781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.554788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.554797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.423 [2024-12-07 11:43:12.554809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.423 [2024-12-07 11:43:12.554828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.423 [2024-12-07 11:43:12.559026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.423 [2024-12-07 11:43:12.559042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.423 [2024-12-07 11:43:12.559048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.559055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.559064] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:13.423 [2024-12-07 
11:43:12.559072] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:13.423 [2024-12-07 11:43:12.559088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.559096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.559102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.423 [2024-12-07 11:43:12.559115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.423 [2024-12-07 11:43:12.559139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.423 [2024-12-07 11:43:12.559312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.423 [2024-12-07 11:43:12.559322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.423 [2024-12-07 11:43:12.559327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.423 [2024-12-07 11:43:12.559333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.423 [2024-12-07 11:43:12.559346] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 0 milliseconds 00:31:13.424 00:31:13.424 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:13.424 [2024-12-07 11:43:12.657186] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:31:13.424 [2024-12-07 11:43:12.657274] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674384 ] 00:31:13.424 [2024-12-07 11:43:12.730265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:13.424 [2024-12-07 11:43:12.730365] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:13.424 [2024-12-07 11:43:12.730379] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:13.424 [2024-12-07 11:43:12.730398] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:13.424 [2024-12-07 11:43:12.730418] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:13.424 [2024-12-07 11:43:12.734395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:13.424 [2024-12-07 11:43:12.734447] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:13.424 [2024-12-07 11:43:12.742036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:13.424 [2024-12-07 11:43:12.742059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:13.424 [2024-12-07 11:43:12.742067] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:13.424 [2024-12-07 11:43:12.742073] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:13.424 [2024-12-07 11:43:12.742124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.742136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.742146] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.424 [2024-12-07 11:43:12.742166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:13.424 [2024-12-07 11:43:12.742193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.424 [2024-12-07 11:43:12.750031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.424 [2024-12-07 11:43:12.750051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.424 [2024-12-07 11:43:12.750057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.424 [2024-12-07 11:43:12.750086] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:13.424 [2024-12-07 11:43:12.750102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:13.424 [2024-12-07 11:43:12.750112] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:13.424 [2024-12-07 11:43:12.750128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.424 [2024-12-07 11:43:12.750160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.424 [2024-12-07 11:43:12.750182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.424 
[2024-12-07 11:43:12.750262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.424 [2024-12-07 11:43:12.750274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.424 [2024-12-07 11:43:12.750280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.424 [2024-12-07 11:43:12.750297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:13.424 [2024-12-07 11:43:12.750310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:13.424 [2024-12-07 11:43:12.750323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.424 [2024-12-07 11:43:12.750356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.424 [2024-12-07 11:43:12.750373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.424 [2024-12-07 11:43:12.750460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.424 [2024-12-07 11:43:12.750470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.424 [2024-12-07 11:43:12.750475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.424 [2024-12-07 11:43:12.750490] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:13.424 [2024-12-07 11:43:12.750503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:13.424 [2024-12-07 11:43:12.750513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.424 [2024-12-07 11:43:12.750540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.424 [2024-12-07 11:43:12.750555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.424 [2024-12-07 11:43:12.750662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.424 [2024-12-07 11:43:12.750672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.424 [2024-12-07 11:43:12.750677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.424 [2024-12-07 11:43:12.750693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:13.424 [2024-12-07 11:43:12.750707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.424 
[2024-12-07 11:43:12.750732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.424 [2024-12-07 11:43:12.750747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.424 [2024-12-07 11:43:12.750809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.424 [2024-12-07 11:43:12.750819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.424 [2024-12-07 11:43:12.750824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.424 [2024-12-07 11:43:12.750830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.424 [2024-12-07 11:43:12.750838] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:13.424 [2024-12-07 11:43:12.750847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:13.425 [2024-12-07 11:43:12.750860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:13.425 [2024-12-07 11:43:12.750969] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:13.425 [2024-12-07 11:43:12.750981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:13.425 [2024-12-07 11:43:12.750998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.751029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.425 [2024-12-07 11:43:12.751047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.425 [2024-12-07 11:43:12.751104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.425 [2024-12-07 11:43:12.751114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.425 [2024-12-07 11:43:12.751119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.425 [2024-12-07 11:43:12.751134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:13.425 [2024-12-07 11:43:12.751151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.751182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.425 [2024-12-07 11:43:12.751197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.425 [2024-12-07 11:43:12.751273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.425 [2024-12-07 11:43:12.751285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.425 [2024-12-07 11:43:12.751290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 
11:43:12.751296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.425 [2024-12-07 11:43:12.751304] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:13.425 [2024-12-07 11:43:12.751312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:13.425 [2024-12-07 11:43:12.751323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:13.425 [2024-12-07 11:43:12.751337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:13.425 [2024-12-07 11:43:12.751353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.751373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.425 [2024-12-07 11:43:12.751390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.425 [2024-12-07 11:43:12.751490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.425 [2024-12-07 11:43:12.751505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.425 [2024-12-07 11:43:12.751511] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:13.425 [2024-12-07 11:43:12.751526] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.425 [2024-12-07 11:43:12.751538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751553] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751560] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.425 [2024-12-07 11:43:12.751710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.425 [2024-12-07 11:43:12.751715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.425 [2024-12-07 11:43:12.751736] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:13.425 [2024-12-07 11:43:12.751745] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:13.425 [2024-12-07 11:43:12.751754] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:13.425 [2024-12-07 11:43:12.751764] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:13.425 [2024-12-07 11:43:12.751773] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:13.425 [2024-12-07 11:43:12.751783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:13.425 [2024-12-07 11:43:12.751798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
wait for configure aer (timeout 30000 ms) 00:31:13.425 [2024-12-07 11:43:12.751810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.751837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.425 [2024-12-07 11:43:12.751853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.425 [2024-12-07 11:43:12.751929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.425 [2024-12-07 11:43:12.751939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.425 [2024-12-07 11:43:12.751944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.425 [2024-12-07 11:43:12.751964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.751981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.751994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.425 [2024-12-07 11:43:12.752004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752021] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.752032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.425 [2024-12-07 11:43:12.752042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.752065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.425 [2024-12-07 11:43:12.752074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.425 [2024-12-07 11:43:12.752085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.425 [2024-12-07 11:43:12.752094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.426 [2024-12-07 11:43:12.752102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.426 [2024-12-07 11:43:12.752147] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.426 [2024-12-07 11:43:12.752165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.426 [2024-12-07 11:43:12.752176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:13.426 [2024-12-07 11:43:12.752186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:13.426 [2024-12-07 11:43:12.752193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.426 [2024-12-07 11:43:12.752200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.426 [2024-12-07 11:43:12.752351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.426 [2024-12-07 11:43:12.752360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.426 [2024-12-07 11:43:12.752366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.426 [2024-12-07 11:43:12.752382] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:13.426 [2024-12-07 11:43:12.752392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state 
to wait for set number of queues (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.426 [2024-12-07 11:43:12.752449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.426 [2024-12-07 11:43:12.752463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.426 [2024-12-07 11:43:12.752526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.426 [2024-12-07 11:43:12.752536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.426 [2024-12-07 11:43:12.752541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.426 [2024-12-07 11:43:12.752633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.426 [2024-12-07 11:43:12.752684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:13.426 [2024-12-07 11:43:12.752699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.426 [2024-12-07 11:43:12.752827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.426 [2024-12-07 11:43:12.752847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.426 [2024-12-07 11:43:12.752853] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752860] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:13.426 [2024-12-07 11:43:12.752867] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.426 [2024-12-07 11:43:12.752874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752884] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752890] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.426 [2024-12-07 11:43:12.752914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.426 [2024-12-07 11:43:12.752919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.752925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.426 [2024-12-07 11:43:12.752951] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:13.426 [2024-12-07 11:43:12.752966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752980] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.752993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.426 [2024-12-07 11:43:12.753019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.426 [2024-12-07 11:43:12.753035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.426 [2024-12-07 11:43:12.753161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.426 [2024-12-07 11:43:12.753176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.426 [2024-12-07 11:43:12.753182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753188] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:13.426 [2024-12-07 11:43:12.753195] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.426 [2024-12-07 11:43:12.753206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753217] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753225] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.426 [2024-12-07 11:43:12.753248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.426 [2024-12-07 11:43:12.753253] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.426 [2024-12-07 11:43:12.753278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.753292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:13.426 [2024-12-07 11:43:12.753308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.426 [2024-12-07 11:43:12.753315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.426 [2024-12-07 11:43:12.753328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.426 [2024-12-07 11:43:12.753344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.426 [2024-12-07 11:43:12.753419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.426 [2024-12-07 11:43:12.753433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.426 [2024-12-07 11:43:12.753439] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753445] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:13.427 [2024-12-07 11:43:12.753454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.427 [2024-12-07 11:43:12.753460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.427 [2024-12-07 
11:43:12.753470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.753558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.753564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.427 [2024-12-07 11:43:12.753586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753644] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:13.427 [2024-12-07 11:43:12.753652] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:13.427 [2024-12-07 11:43:12.753660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:13.427 [2024-12-07 11:43:12.753692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.753715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.753725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.753748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.427 [2024-12-07 11:43:12.753765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.427 [2024-12-07 11:43:12.753779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.427 [2024-12-07 11:43:12.753856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.753867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.753872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 
00:31:13.427 [2024-12-07 11:43:12.753892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.753902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.753907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.427 [2024-12-07 11:43:12.753926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.753932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.753943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.753957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.427 [2024-12-07 11:43:12.758023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.758041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.758046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.427 [2024-12-07 11:43:12.758082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.758100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 
11:43:12.758123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.427 [2024-12-07 11:43:12.758199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.758211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.758216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.427 [2024-12-07 11:43:12.758235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.758256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.758271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.427 [2024-12-07 11:43:12.758327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.427 [2024-12-07 11:43:12.758337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.427 [2024-12-07 11:43:12.758342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.427 [2024-12-07 11:43:12.758372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.758392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.758403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.758422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.758433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.427 [2024-12-07 11:43:12.758440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025600) 00:31:13.427 [2024-12-07 11:43:12.758451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.427 [2024-12-07 11:43:12.758463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:31:13.428 [2024-12-07 11:43:12.758484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.428 [2024-12-07 11:43:12.758501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.428 [2024-12-07 11:43:12.758513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.428 [2024-12-07 11:43:12.758523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:13.428 [2024-12-07 11:43:12.758533] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:13.428 [2024-12-07 11:43:12.758657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.428 [2024-12-07 11:43:12.758673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.428 [2024-12-07 11:43:12.758679] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758686] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8192, cccid=5 00:31:13.428 [2024-12-07 11:43:12.758696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025600): expected_datao=0, payload_size=8192 00:31:13.428 [2024-12-07 11:43:12.758703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758801] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.428 [2024-12-07 11:43:12.758824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.428 [2024-12-07 11:43:12.758831] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758837] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=4 00:31:13.428 [2024-12-07 11:43:12.758844] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:13.428 [2024-12-07 11:43:12.758858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758868] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758874] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.428 [2024-12-07 11:43:12.758890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.428 [2024-12-07 11:43:12.758895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=6 00:31:13.428 [2024-12-07 11:43:12.758907] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:13.428 [2024-12-07 11:43:12.758913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758922] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758929] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.428 [2024-12-07 11:43:12.758945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.428 [2024-12-07 11:43:12.758950] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758956] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=7 00:31:13.428 [2024-12-07 11:43:12.758963] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:13.428 [2024-12-07 11:43:12.758969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758984] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.758990] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.759003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.428 [2024-12-07 11:43:12.759018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.428 [2024-12-07 11:43:12.759024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.759031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:13.428 [2024-12-07 11:43:12.759054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.428 [2024-12-07 11:43:12.759068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.428 [2024-12-07 11:43:12.759073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.759079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:13.428 [2024-12-07 11:43:12.759092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.428 [2024-12-07 11:43:12.759100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.428 [2024-12-07 11:43:12.759105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.759111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025600 00:31:13.428 [2024-12-07 11:43:12.759124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.428 [2024-12-07 11:43:12.759134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.428 [2024-12-07 11:43:12.759139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.428 [2024-12-07 11:43:12.759145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:31:13.428 
=====================================================
00:31:13.428 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:13.428 =====================================================
00:31:13.428 Controller Capabilities/Features
00:31:13.428 ================================
00:31:13.428 Vendor ID: 8086
00:31:13.428 Subsystem Vendor ID: 8086
00:31:13.428 Serial Number: SPDK00000000000001
00:31:13.428 Model Number: SPDK bdev Controller
00:31:13.428 Firmware Version: 25.01
00:31:13.428 Recommended Arb Burst: 6
00:31:13.428 IEEE OUI Identifier: e4 d2 5c
00:31:13.428 Multi-path I/O
00:31:13.428 May have multiple subsystem ports: Yes
00:31:13.428 May have multiple controllers: Yes
00:31:13.428 Associated with SR-IOV VF: No
00:31:13.428 Max Data Transfer Size: 131072
00:31:13.428 Max Number of Namespaces: 32
00:31:13.428 Max Number of I/O Queues: 127
00:31:13.428 NVMe Specification Version (VS): 1.3
00:31:13.429 NVMe Specification Version (Identify): 1.3
00:31:13.429 Maximum Queue Entries: 128
00:31:13.429 Contiguous Queues Required: Yes
00:31:13.429 Arbitration Mechanisms Supported
00:31:13.429 Weighted Round Robin: Not Supported
00:31:13.429 Vendor Specific: Not Supported
00:31:13.429 Reset Timeout: 15000 ms
00:31:13.429 Doorbell Stride: 4 bytes
00:31:13.429 NVM Subsystem Reset: Not Supported
00:31:13.429 Command Sets Supported
00:31:13.429 NVM Command Set: Supported
00:31:13.429 Boot Partition: Not Supported
00:31:13.429 Memory Page Size Minimum: 4096 bytes
00:31:13.429 Memory Page Size Maximum: 4096 bytes
00:31:13.429 Persistent Memory Region: Not Supported
00:31:13.429 Optional Asynchronous Events Supported
00:31:13.429 Namespace Attribute Notices: Supported
00:31:13.429 Firmware Activation Notices: Not Supported
00:31:13.429 ANA Change Notices: Not Supported
00:31:13.429 PLE Aggregate Log Change Notices: Not Supported
00:31:13.429 LBA Status Info Alert Notices: Not Supported
00:31:13.429 EGE Aggregate Log Change Notices: Not Supported
00:31:13.429 Normal NVM Subsystem Shutdown event: Not Supported
00:31:13.429 Zone Descriptor Change Notices: Not Supported
00:31:13.429 Discovery Log Change Notices: Not Supported
00:31:13.429 Controller Attributes
00:31:13.429 128-bit Host Identifier: Supported
00:31:13.429 Non-Operational Permissive Mode: Not Supported
00:31:13.429 NVM Sets: Not Supported
00:31:13.429 Read Recovery Levels: Not Supported
00:31:13.429 Endurance Groups: Not Supported
00:31:13.429 Predictable Latency Mode: Not Supported
00:31:13.429 Traffic Based Keep ALive: Not Supported
00:31:13.429 Namespace Granularity: Not Supported
00:31:13.429 SQ Associations: Not Supported
00:31:13.429 UUID List: Not Supported
00:31:13.429 Multi-Domain Subsystem: Not Supported
00:31:13.429 Fixed Capacity Management: Not Supported
00:31:13.429 Variable Capacity Management: Not Supported
00:31:13.429 Delete Endurance Group: Not Supported
00:31:13.429 Delete NVM Set: Not Supported
00:31:13.429 Extended LBA Formats Supported: Not Supported
00:31:13.429 Flexible Data Placement Supported: Not Supported
00:31:13.429
00:31:13.429 Controller Memory Buffer Support
00:31:13.429 ================================
00:31:13.429 Supported: No
00:31:13.429
00:31:13.429 Persistent Memory Region Support
00:31:13.429 ================================
00:31:13.429 Supported: No
00:31:13.429
00:31:13.429 Admin Command Set Attributes
00:31:13.429 ============================
00:31:13.429 Security Send/Receive: Not Supported
00:31:13.429 Format NVM: Not Supported
00:31:13.429 Firmware Activate/Download: Not Supported
00:31:13.429 Namespace Management: Not Supported
00:31:13.429 Device Self-Test: Not Supported
00:31:13.429 Directives: Not Supported
00:31:13.429 NVMe-MI: Not Supported
00:31:13.429 Virtualization Management: Not Supported
00:31:13.429 Doorbell Buffer Config: Not Supported
00:31:13.429 Get LBA Status Capability: Not Supported
00:31:13.429 Command & Feature Lockdown Capability: Not Supported
00:31:13.429 Abort Command Limit: 4
00:31:13.429 Async Event Request Limit: 4
00:31:13.429 Number of Firmware Slots: N/A
00:31:13.429 Firmware Slot 1 Read-Only: N/A
00:31:13.429 Firmware Activation Without Reset: N/A
00:31:13.429 Multiple Update Detection Support: N/A
00:31:13.429 Firmware Update Granularity: No Information Provided
00:31:13.429 Per-Namespace SMART Log: No
00:31:13.429 Asymmetric Namespace Access Log Page: Not Supported
00:31:13.429 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:31:13.429 Command Effects Log Page: Supported
00:31:13.429 Get Log Page Extended Data: Supported
00:31:13.429 Telemetry Log Pages: Not Supported
00:31:13.429 Persistent Event Log Pages: Not Supported
00:31:13.429 Supported Log Pages Log Page: May Support
00:31:13.429 Commands Supported & Effects Log Page: Not Supported
00:31:13.429 Feature Identifiers & Effects Log Page:May Support
00:31:13.429 NVMe-MI Commands & Effects Log Page: May Support
00:31:13.429 Data Area 4 for Telemetry Log: Not Supported
00:31:13.429 Error Log Page Entries Supported: 128
00:31:13.429 Keep Alive: Supported
00:31:13.429 Keep Alive Granularity: 10000 ms
00:31:13.429
00:31:13.429 NVM Command Set Attributes
00:31:13.429 ==========================
00:31:13.429 Submission Queue Entry Size
00:31:13.429 Max: 64
00:31:13.429 Min: 64
00:31:13.429 Completion Queue Entry Size
00:31:13.429 Max: 16
00:31:13.429 Min: 16
00:31:13.429 Number of Namespaces: 32
00:31:13.429 Compare Command: Supported
00:31:13.429 Write Uncorrectable Command: Not Supported
00:31:13.429 Dataset Management Command: Supported
00:31:13.429 Write Zeroes Command: Supported
00:31:13.429 Set Features Save Field: Not Supported
00:31:13.429 Reservations: Supported
00:31:13.429 Timestamp: Not Supported
00:31:13.429 Copy: Supported
00:31:13.429 Volatile Write Cache: Present
00:31:13.429 Atomic Write Unit (Normal): 1
00:31:13.429 Atomic Write Unit (PFail): 1
00:31:13.429 Atomic Compare & Write Unit: 1
00:31:13.429 Fused Compare & Write: Supported
00:31:13.429 Scatter-Gather List
00:31:13.429 SGL Command Set: Supported
00:31:13.429 SGL Keyed: Supported
00:31:13.429 SGL Bit Bucket Descriptor: Not Supported
00:31:13.429 SGL Metadata Pointer: Not Supported
00:31:13.429 Oversized SGL: Not Supported
00:31:13.429 SGL Metadata Address: Not Supported
00:31:13.429 SGL Offset: Supported
00:31:13.429 Transport SGL Data Block: Not Supported
00:31:13.429 Replay Protected Memory Block: Not Supported
00:31:13.429
00:31:13.429 Firmware Slot Information
00:31:13.429 =========================
00:31:13.429 Active slot: 1
00:31:13.429 Slot 1 Firmware Revision: 25.01
00:31:13.429
00:31:13.429
00:31:13.429 Commands Supported and Effects
00:31:13.429 ==============================
00:31:13.429 Admin Commands
00:31:13.429 --------------
00:31:13.429 Get Log Page (02h): Supported
00:31:13.429 Identify (06h): Supported
00:31:13.429 Abort (08h): Supported
00:31:13.429 Set Features (09h): Supported
00:31:13.430 Get Features (0Ah): Supported
00:31:13.430 Asynchronous Event Request (0Ch): Supported
00:31:13.430 Keep Alive (18h): Supported
00:31:13.430 I/O Commands
00:31:13.430 ------------
00:31:13.430 Flush (00h): Supported LBA-Change
00:31:13.430 Write (01h): Supported LBA-Change
00:31:13.430 Read (02h): Supported
00:31:13.430 Compare (05h): Supported
00:31:13.430 Write Zeroes (08h): Supported LBA-Change
00:31:13.430 Dataset Management (09h): Supported LBA-Change
00:31:13.430 Copy (19h): Supported LBA-Change
00:31:13.430
00:31:13.430 Error Log
00:31:13.430 =========
00:31:13.430
00:31:13.430 Arbitration
00:31:13.430 ===========
00:31:13.430 Arbitration Burst: 1
00:31:13.430
00:31:13.430 Power Management
00:31:13.430 ================
00:31:13.430 Number of Power States: 1
00:31:13.430 Current Power State: Power State #0
00:31:13.430 Power State #0:
00:31:13.430 Max Power: 0.00 W
00:31:13.430 Non-Operational State: Operational
00:31:13.430 Entry Latency: Not Reported
00:31:13.430 Exit Latency: Not Reported
00:31:13.430 Relative Read Throughput: 0
00:31:13.430 Relative Read Latency: 0
00:31:13.430 Relative Write Throughput: 0
00:31:13.430 Relative Write Latency: 0
00:31:13.430 Idle Power: Not Reported
00:31:13.430 Active Power: Not Reported
00:31:13.430 Non-Operational Permissive Mode: Not Supported
00:31:13.430
00:31:13.430 Health Information
00:31:13.430 ==================
00:31:13.430 Critical Warnings:
00:31:13.430 Available Spare Space: OK
00:31:13.430 Temperature: OK
00:31:13.430 Device Reliability: OK
00:31:13.430 Read Only: No
00:31:13.430 Volatile Memory Backup: OK
00:31:13.430 Current Temperature: 0 Kelvin (-273 Celsius)
00:31:13.430 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:31:13.430 Available Spare: 0%
00:31:13.430 Available Spare Threshold: 0%
00:31:13.430 Life Percentage Used:[2024-12-07 11:43:12.759304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:13.430 [2024-12-07 11:43:12.759314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600)
00:31:13.430 [2024-12-07 11:43:12.759327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:13.430 [2024-12-07 11:43:12.759346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0
00:31:13.430 [2024-12-07 11:43:12.759417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:13.430 [2024-12-07 11:43:12.759427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:13.430 [2024-12-07 11:43:12.759433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:13.430 [2024-12-07 11:43:12.759440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600
00:31:13.430 [2024-12-07 11:43:12.759488] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:31:13.430 [2024-12-07 11:43:12.759505]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.430 [2024-12-07 11:43:12.759525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.430 [2024-12-07 11:43:12.759541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.430 [2024-12-07 11:43:12.759556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.430 [2024-12-07 11:43:12.759576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.430 [2024-12-07 11:43:12.759602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.430 [2024-12-07 11:43:12.759620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.430 [2024-12-07 11:43:12.759693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:31:13.430 [2024-12-07 11:43:12.759704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.430 [2024-12-07 11:43:12.759710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.430 [2024-12-07 11:43:12.759762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.430 [2024-12-07 11:43:12.759781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.430 [2024-12-07 11:43:12.759896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.430 [2024-12-07 11:43:12.759906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.430 [2024-12-07 11:43:12.759911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.759927] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:13.430 [2024-12-07 11:43:12.759935] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:13.430 [2024-12-07 11:43:12.759950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759959] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.759965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.430 [2024-12-07 11:43:12.759980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.430 [2024-12-07 11:43:12.759996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.430 [2024-12-07 11:43:12.760095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.430 [2024-12-07 11:43:12.760106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.430 [2024-12-07 11:43:12.760111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.760117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.430 [2024-12-07 11:43:12.760134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.760141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.430 [2024-12-07 11:43:12.760147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.430 [2024-12-07 11:43:12.760157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.430 [2024-12-07 11:43:12.760171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.430 [2024-12-07 11:43:12.760222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.430 [2024-12-07 11:43:12.760232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.430 [2024-12-07 11:43:12.760238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 
11:43:12.760244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.760259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.760282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.760295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.760398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.760408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.760413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.760432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.760455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.760469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.760550] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.760560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.760566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.760587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.760610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.760623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.760699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.760711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.760717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.760736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.760758] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.760774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.760830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.760840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.760845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.760867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.760879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.760889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.760903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.761003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.761022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.761028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.761049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 
11:43:12.761056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.761074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.761088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.761163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.761174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.761182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.761201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.761224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.761238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.761313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.761323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.761333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:13.431 [2024-12-07 11:43:12.761339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.761353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.761375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.431 [2024-12-07 11:43:12.761389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.431 [2024-12-07 11:43:12.761443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.431 [2024-12-07 11:43:12.761453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.431 [2024-12-07 11:43:12.761458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.431 [2024-12-07 11:43:12.761478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.431 [2024-12-07 11:43:12.761490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.431 [2024-12-07 11:43:12.761502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.432 [2024-12-07 11:43:12.761516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.432 [2024-12-07 11:43:12.761618] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.432 [2024-12-07 11:43:12.761629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.432 [2024-12-07 11:43:12.761635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.432 [2024-12-07 11:43:12.761654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.432 [2024-12-07 11:43:12.761676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.432 [2024-12-07 11:43:12.761690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.432 [2024-12-07 11:43:12.761767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.432 [2024-12-07 11:43:12.761777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.432 [2024-12-07 11:43:12.761782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.432 [2024-12-07 11:43:12.761801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.432 [2024-12-07 11:43:12.761824] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.432 [2024-12-07 11:43:12.761837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.432 [2024-12-07 11:43:12.761926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.432 [2024-12-07 11:43:12.761936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.432 [2024-12-07 11:43:12.761941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.432 [2024-12-07 11:43:12.761962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.432 [2024-12-07 11:43:12.761974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.432 [2024-12-07 11:43:12.761985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.432 [2024-12-07 11:43:12.761999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.693 [2024-12-07 11:43:12.766029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.693 [2024-12-07 11:43:12.766048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.693 [2024-12-07 11:43:12.766055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.693 [2024-12-07 11:43:12.766061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.693 [2024-12-07 11:43:12.766077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.693 [2024-12-07 
11:43:12.766088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.693 [2024-12-07 11:43:12.766094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:13.693 [2024-12-07 11:43:12.766106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.693 [2024-12-07 11:43:12.766125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.693 [2024-12-07 11:43:12.766200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.693 [2024-12-07 11:43:12.766210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.693 [2024-12-07 11:43:12.766219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.693 [2024-12-07 11:43:12.766228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:13.693 [2024-12-07 11:43:12.766240] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:31:13.693 0% 00:31:13.693 Data Units Read: 0 00:31:13.693 Data Units Written: 0 00:31:13.693 Host Read Commands: 0 00:31:13.693 Host Write Commands: 0 00:31:13.693 Controller Busy Time: 0 minutes 00:31:13.693 Power Cycles: 0 00:31:13.693 Power On Hours: 0 hours 00:31:13.694 Unsafe Shutdowns: 0 00:31:13.694 Unrecoverable Media Errors: 0 00:31:13.694 Lifetime Error Log Entries: 0 00:31:13.694 Warning Temperature Time: 0 minutes 00:31:13.694 Critical Temperature Time: 0 minutes 00:31:13.694 00:31:13.694 Number of Queues 00:31:13.694 ================ 00:31:13.694 Number of I/O Submission Queues: 127 00:31:13.694 Number of I/O Completion Queues: 127 00:31:13.694 00:31:13.694 Active Namespaces 00:31:13.694 ================= 00:31:13.694 Namespace ID:1 00:31:13.694 Error Recovery Timeout: Unlimited 00:31:13.694 
Command Set Identifier: NVM (00h) 00:31:13.694 Deallocate: Supported 00:31:13.694 Deallocated/Unwritten Error: Not Supported 00:31:13.694 Deallocated Read Value: Unknown 00:31:13.694 Deallocate in Write Zeroes: Not Supported 00:31:13.694 Deallocated Guard Field: 0xFFFF 00:31:13.694 Flush: Supported 00:31:13.694 Reservation: Supported 00:31:13.694 Namespace Sharing Capabilities: Multiple Controllers 00:31:13.694 Size (in LBAs): 131072 (0GiB) 00:31:13.694 Capacity (in LBAs): 131072 (0GiB) 00:31:13.694 Utilization (in LBAs): 131072 (0GiB) 00:31:13.694 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:13.694 EUI64: ABCDEF0123456789 00:31:13.694 UUID: cc590480-d8ac-4e4a-ac2e-07ea6e2a2d24 00:31:13.694 Thin Provisioning: Not Supported 00:31:13.694 Per-NS Atomic Units: Yes 00:31:13.694 Atomic Boundary Size (Normal): 0 00:31:13.694 Atomic Boundary Size (PFail): 0 00:31:13.694 Atomic Boundary Offset: 0 00:31:13.694 Maximum Single Source Range Length: 65535 00:31:13.694 Maximum Copy Length: 65535 00:31:13.694 Maximum Source Range Count: 1 00:31:13.694 NGUID/EUI64 Never Reused: No 00:31:13.694 Namespace Write Protected: No 00:31:13.694 Number of LBA Formats: 1 00:31:13.694 Current LBA Format: LBA Format #00 00:31:13.694 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:13.694 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@56 -- # nvmftestfini 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.694 rmmod nvme_tcp 00:31:13.694 rmmod nvme_fabrics 00:31:13.694 rmmod nvme_keyring 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2674176 ']' 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2674176 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2674176 ']' 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2674176 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2674176 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2674176' 00:31:13.694 killing process with pid 2674176 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2674176 00:31:13.694 11:43:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2674176 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.638 11:43:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.185 11:43:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.185 00:31:17.185 real 0m12.566s 00:31:17.185 user 0m10.831s 00:31:17.185 sys 0m6.337s 00:31:17.185 11:43:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.185 11:43:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.185 ************************************ 00:31:17.185 END TEST nvmf_identify 00:31:17.185 ************************************ 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.185 ************************************ 00:31:17.185 START TEST nvmf_perf 00:31:17.185 ************************************ 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:17.185 * Looking for test storage... 
00:31:17.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.185 --rc genhtml_branch_coverage=1 00:31:17.185 --rc genhtml_function_coverage=1 00:31:17.185 --rc genhtml_legend=1 00:31:17.185 --rc geninfo_all_blocks=1 00:31:17.185 --rc geninfo_unexecuted_blocks=1 00:31:17.185 00:31:17.185 ' 00:31:17.185 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:17.185 --rc genhtml_branch_coverage=1 00:31:17.185 --rc genhtml_function_coverage=1 00:31:17.185 --rc genhtml_legend=1 00:31:17.185 --rc geninfo_all_blocks=1 00:31:17.186 --rc geninfo_unexecuted_blocks=1 00:31:17.186 00:31:17.186 ' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.186 --rc genhtml_branch_coverage=1 00:31:17.186 --rc genhtml_function_coverage=1 00:31:17.186 --rc genhtml_legend=1 00:31:17.186 --rc geninfo_all_blocks=1 00:31:17.186 --rc geninfo_unexecuted_blocks=1 00:31:17.186 00:31:17.186 ' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.186 --rc genhtml_branch_coverage=1 00:31:17.186 --rc genhtml_function_coverage=1 00:31:17.186 --rc genhtml_legend=1 00:31:17.186 --rc geninfo_all_blocks=1 00:31:17.186 --rc geninfo_unexecuted_blocks=1 00:31:17.186 00:31:17.186 ' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:17.186 11:43:16 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.186 11:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.328 11:43:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.328 
11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:25.328 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:25.328 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:25.328 Found net devices under 0000:31:00.0: cvl_0_0 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.328 11:43:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:25.328 Found net devices under 0000:31:00.1: cvl_0_1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:31:25.328 00:31:25.328 --- 10.0.0.2 ping statistics --- 00:31:25.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.328 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:25.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:25.328 00:31:25.328 --- 10.0.0.1 ping statistics --- 00:31:25.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.328 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:25.328 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2678909 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2678909 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2678909 ']' 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.329 11:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.329 [2024-12-07 11:43:23.880334] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:31:25.329 [2024-12-07 11:43:23.880437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.329 [2024-12-07 11:43:24.020400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.329 [2024-12-07 11:43:24.120055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.329 [2024-12-07 11:43:24.120099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.329 [2024-12-07 11:43:24.120112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.329 [2024-12-07 11:43:24.120125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.329 [2024-12-07 11:43:24.120134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:25.329 [2024-12-07 11:43:24.122363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.329 [2024-12-07 11:43:24.122518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.329 [2024-12-07 11:43:24.122638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.329 [2024-12-07 11:43:24.122657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.329 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.329 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:25.329 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:25.329 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:25.329 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.590 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.590 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:25.590 11:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:26.160 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:26.160 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:26.160 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:26.160 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.420 11:43:25 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:26.420 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:26.420 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:26.420 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:26.420 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:26.681 [2024-12-07 11:43:25.809917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.681 11:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:26.681 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:26.681 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:26.943 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:26.943 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:27.269 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.269 [2024-12-07 11:43:26.552843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.269 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:31:27.661 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:27.661 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:27.661 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:27.662 11:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:29.058 Initializing NVMe Controllers 00:31:29.058 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:29.058 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:29.058 Initialization complete. Launching workers. 00:31:29.058 ======================================================== 00:31:29.058 Latency(us) 00:31:29.058 Device Information : IOPS MiB/s Average min max 00:31:29.058 PCIE (0000:65:00.0) NSID 1 from core 0: 74047.41 289.25 431.64 14.16 4869.43 00:31:29.058 ======================================================== 00:31:29.058 Total : 74047.41 289.25 431.64 14.16 4869.43 00:31:29.058 00:31:29.058 11:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.989 Initializing NVMe Controllers 00:31:30.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:30.989 Initialization complete. Launching workers. 
00:31:30.989 ======================================================== 00:31:30.989 Latency(us) 00:31:30.989 Device Information : IOPS MiB/s Average min max 00:31:30.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10171.59 168.93 45951.65 00:31:30.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23756.58 7958.95 47892.77 00:31:30.989 ======================================================== 00:31:30.989 Total : 145.00 0.57 14200.24 168.93 47892.77 00:31:30.989 00:31:30.989 11:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.930 Initializing NVMe Controllers 00:31:31.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:31.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:31.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:31.931 Initialization complete. Launching workers. 
00:31:31.931 ======================================================== 00:31:31.931 Latency(us) 00:31:31.931 Device Information : IOPS MiB/s Average min max 00:31:31.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9388.97 36.68 3420.06 628.42 10166.59 00:31:31.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3691.99 14.42 8706.18 5065.52 16301.33 00:31:31.931 ======================================================== 00:31:31.931 Total : 13080.96 51.10 4912.02 628.42 16301.33 00:31:31.931 00:31:32.192 11:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:32.192 11:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:32.192 11:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.738 Initializing NVMe Controllers 00:31:34.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.738 Controller IO queue size 128, less than required. 00:31:34.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:34.738 Controller IO queue size 128, less than required. 00:31:34.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:34.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:34.739 Initialization complete. Launching workers. 
00:31:34.739 ======================================================== 00:31:34.739 Latency(us) 00:31:34.739 Device Information : IOPS MiB/s Average min max 00:31:34.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1399.48 349.87 95331.38 56245.87 240131.36 00:31:34.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 551.99 138.00 243030.95 122116.52 458272.88 00:31:34.739 ======================================================== 00:31:34.739 Total : 1951.48 487.87 137109.59 56245.87 458272.88 00:31:34.739 00:31:34.739 11:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:35.310 No valid NVMe controllers or AIO or URING devices found 00:31:35.310 Initializing NVMe Controllers 00:31:35.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.310 Controller IO queue size 128, less than required. 00:31:35.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:35.310 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:35.310 Controller IO queue size 128, less than required. 00:31:35.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:35.310 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:35.310 WARNING: Some requested NVMe devices were skipped 00:31:35.310 11:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:38.611 Initializing NVMe Controllers 00:31:38.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.611 Controller IO queue size 128, less than required. 00:31:38.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.611 Controller IO queue size 128, less than required. 00:31:38.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:38.611 Initialization complete. Launching workers. 
00:31:38.611 00:31:38.611 ==================== 00:31:38.611 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:38.611 TCP transport: 00:31:38.611 polls: 17158 00:31:38.611 idle_polls: 7303 00:31:38.611 sock_completions: 9855 00:31:38.611 nvme_completions: 5849 00:31:38.611 submitted_requests: 8844 00:31:38.611 queued_requests: 1 00:31:38.611 00:31:38.611 ==================== 00:31:38.611 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:38.611 TCP transport: 00:31:38.611 polls: 19784 00:31:38.611 idle_polls: 9378 00:31:38.611 sock_completions: 10406 00:31:38.611 nvme_completions: 5673 00:31:38.611 submitted_requests: 8482 00:31:38.611 queued_requests: 1 00:31:38.611 ======================================================== 00:31:38.611 Latency(us) 00:31:38.611 Device Information : IOPS MiB/s Average min max 00:31:38.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1461.75 365.44 92654.33 49043.12 316698.10 00:31:38.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1417.75 354.44 92391.38 50611.52 333409.67 00:31:38.611 ======================================================== 00:31:38.611 Total : 2879.50 719.88 92524.86 49043.12 333409.67 00:31:38.611 00:31:38.611 11:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:38.611 11:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.611 11:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:38.611 11:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:38.611 11:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=4e96c186-ffe9-41c7-beb4-ca6e3bcef921 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4e96c186-ffe9-41c7-beb4-ca6e3bcef921 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=4e96c186-ffe9-41c7-beb4-ca6e3bcef921 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:39.551 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:39.551 { 00:31:39.552 "uuid": "4e96c186-ffe9-41c7-beb4-ca6e3bcef921", 00:31:39.552 "name": "lvs_0", 00:31:39.552 "base_bdev": "Nvme0n1", 00:31:39.552 "total_data_clusters": 457407, 00:31:39.552 "free_clusters": 457407, 00:31:39.552 "block_size": 512, 00:31:39.552 "cluster_size": 4194304 00:31:39.552 } 00:31:39.552 ]' 00:31:39.552 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4e96c186-ffe9-41c7-beb4-ca6e3bcef921") .free_clusters' 00:31:39.552 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=457407 00:31:39.552 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="4e96c186-ffe9-41c7-beb4-ca6e3bcef921") .cluster_size' 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1829628 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1829628 
00:31:39.812 1829628 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:39.812 11:43:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e96c186-ffe9-41c7-beb4-ca6e3bcef921 lbd_0 20480 00:31:39.812 11:43:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2aecc7cb-9b62-4674-9db2-674698464dbd 00:31:39.812 11:43:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2aecc7cb-9b62-4674-9db2-674698464dbd lvs_n_0 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=07962222-6c01-460c-a451-f300ca1d550a 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 07962222-6c01-460c-a451-f300ca1d550a 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=07962222-6c01-460c-a451-f300ca1d550a 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:41.725 { 00:31:41.725 "uuid": "4e96c186-ffe9-41c7-beb4-ca6e3bcef921", 00:31:41.725 "name": "lvs_0", 00:31:41.725 "base_bdev": "Nvme0n1", 00:31:41.725 "total_data_clusters": 457407, 00:31:41.725 "free_clusters": 452287, 00:31:41.725 "block_size": 512, 00:31:41.725 
"cluster_size": 4194304 00:31:41.725 }, 00:31:41.725 { 00:31:41.725 "uuid": "07962222-6c01-460c-a451-f300ca1d550a", 00:31:41.725 "name": "lvs_n_0", 00:31:41.725 "base_bdev": "2aecc7cb-9b62-4674-9db2-674698464dbd", 00:31:41.725 "total_data_clusters": 5114, 00:31:41.725 "free_clusters": 5114, 00:31:41.725 "block_size": 512, 00:31:41.725 "cluster_size": 4194304 00:31:41.725 } 00:31:41.725 ]' 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="07962222-6c01-460c-a451-f300ca1d550a") .free_clusters' 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:41.725 11:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="07962222-6c01-460c-a451-f300ca1d550a") .cluster_size' 00:31:41.725 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:41.725 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:41.725 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:41.725 20456 00:31:41.725 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:41.725 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 07962222-6c01-460c-a451-f300ca1d550a lbd_nest_0 20456 00:31:41.985 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=3fa7c77b-7526-4879-8f7b-b226e895ad96 00:31:41.985 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:42.244 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:42.244 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3fa7c77b-7526-4879-8f7b-b226e895ad96 00:31:42.244 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.505 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:42.505 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:42.505 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:42.505 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:42.505 11:43:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.733 Initializing NVMe Controllers 00:31:54.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.733 Initialization complete. Launching workers. 
00:31:54.733 ======================================================== 00:31:54.733 Latency(us) 00:31:54.733 Device Information : IOPS MiB/s Average min max 00:31:54.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.50 0.02 21528.19 245.58 46311.24 00:31:54.733 ======================================================== 00:31:54.733 Total : 46.50 0.02 21528.19 245.58 46311.24 00:31:54.733 00:31:54.733 11:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:54.733 11:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.724 Initializing NVMe Controllers 00:32:04.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:04.724 Initialization complete. Launching workers. 
00:32:04.724 ======================================================== 00:32:04.724 Latency(us) 00:32:04.724 Device Information : IOPS MiB/s Average min max 00:32:04.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.79 8.47 14750.85 5031.92 51878.18 00:32:04.724 ======================================================== 00:32:04.724 Total : 67.79 8.47 14750.85 5031.92 51878.18 00:32:04.724 00:32:04.724 11:44:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:04.724 11:44:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:04.724 11:44:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.720 Initializing NVMe Controllers 00:32:14.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:14.720 Initialization complete. Launching workers. 
00:32:14.720 ======================================================== 00:32:14.720 Latency(us) 00:32:14.720 Device Information : IOPS MiB/s Average min max 00:32:14.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8435.54 4.12 3793.52 317.65 10293.56 00:32:14.720 ======================================================== 00:32:14.720 Total : 8435.54 4.12 3793.52 317.65 10293.56 00:32:14.720 00:32:14.720 11:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:14.720 11:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:24.708 Initializing NVMe Controllers 00:32:24.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.708 Initialization complete. Launching workers. 
00:32:24.708 ======================================================== 00:32:24.708 Latency(us) 00:32:24.708 Device Information : IOPS MiB/s Average min max 00:32:24.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3434.94 429.37 9316.53 636.08 22917.80 00:32:24.708 ======================================================== 00:32:24.708 Total : 3434.94 429.37 9316.53 636.08 22917.80 00:32:24.708 00:32:24.708 11:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:24.708 11:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:24.708 11:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:34.761 Initializing NVMe Controllers 00:32:34.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:34.761 Controller IO queue size 128, less than required. 00:32:34.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:34.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:34.761 Initialization complete. Launching workers. 
00:32:34.761 ======================================================== 00:32:34.761 Latency(us) 00:32:34.761 Device Information : IOPS MiB/s Average min max 00:32:34.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15737.53 7.68 8133.31 1816.96 22432.68 00:32:34.761 ======================================================== 00:32:34.761 Total : 15737.53 7.68 8133.31 1816.96 22432.68 00:32:34.761 00:32:34.761 11:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:34.761 11:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.023 Initializing NVMe Controllers 00:32:47.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:47.023 Controller IO queue size 128, less than required. 00:32:47.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:47.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:47.023 Initialization complete. Launching workers. 
00:32:47.023 ======================================================== 00:32:47.023 Latency(us) 00:32:47.023 Device Information : IOPS MiB/s Average min max 00:32:47.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1135.22 141.90 113322.58 15947.73 246617.74 00:32:47.023 ======================================================== 00:32:47.023 Total : 1135.22 141.90 113322.58 15947.73 246617.74 00:32:47.023 00:32:47.023 11:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.023 11:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3fa7c77b-7526-4879-8f7b-b226e895ad96 00:32:47.023 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:47.023 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2aecc7cb-9b62-4674-9db2-674698464dbd 00:32:47.282 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:47.282 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:47.282 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:47.282 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.282 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:47.543 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.543 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.544 rmmod nvme_tcp 00:32:47.544 rmmod nvme_fabrics 00:32:47.544 rmmod nvme_keyring 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2678909 ']' 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2678909 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2678909 ']' 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2678909 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678909 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678909' 00:32:47.544 killing process with pid 2678909 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2678909 00:32:47.544 11:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2678909 00:32:50.089 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.090 11:44:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.637 00:32:52.637 real 1m35.377s 00:32:52.637 user 5m36.512s 00:32:52.637 sys 0m15.797s 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:52.637 ************************************ 00:32:52.637 END TEST nvmf_perf 00:32:52.637 ************************************ 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:52.637 ************************************ 00:32:52.637 START TEST nvmf_fio_host 00:32:52.637 ************************************ 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:52.637 * Looking for test storage... 00:32:52.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.637 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:32:52.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.638 --rc genhtml_branch_coverage=1 00:32:52.638 --rc genhtml_function_coverage=1 00:32:52.638 --rc genhtml_legend=1 00:32:52.638 --rc geninfo_all_blocks=1 00:32:52.638 --rc geninfo_unexecuted_blocks=1 00:32:52.638 00:32:52.638 ' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:52.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.638 --rc genhtml_branch_coverage=1 00:32:52.638 --rc genhtml_function_coverage=1 00:32:52.638 --rc genhtml_legend=1 00:32:52.638 --rc geninfo_all_blocks=1 00:32:52.638 --rc geninfo_unexecuted_blocks=1 00:32:52.638 00:32:52.638 ' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:52.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.638 --rc genhtml_branch_coverage=1 00:32:52.638 --rc genhtml_function_coverage=1 00:32:52.638 --rc genhtml_legend=1 00:32:52.638 --rc geninfo_all_blocks=1 00:32:52.638 --rc geninfo_unexecuted_blocks=1 00:32:52.638 00:32:52.638 ' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:52.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.638 --rc genhtml_branch_coverage=1 00:32:52.638 --rc genhtml_function_coverage=1 00:32:52.638 --rc genhtml_legend=1 00:32:52.638 --rc geninfo_all_blocks=1 00:32:52.638 --rc geninfo_unexecuted_blocks=1 00:32:52.638 00:32:52.638 ' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.638 11:44:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.638 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:52.639 11:44:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.639 11:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:33:00.871 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:00.871 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.871 11:44:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:00.871 Found net devices under 0000:31:00.0: cvl_0_0 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.871 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:00.872 Found net devices under 0000:31:00.1: cvl_0_1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.872 11:44:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:33:00.872 00:33:00.872 --- 10.0.0.2 ping statistics --- 00:33:00.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.872 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:33:00.872 11:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:33:00.872 00:33:00.872 --- 10.0.0.1 ping statistics --- 00:33:00.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.872 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2699122 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2699122 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2699122 ']' 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.872 [2024-12-07 11:44:59.147467] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:00.872 [2024-12-07 11:44:59.147602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.872 [2024-12-07 11:44:59.295548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.872 [2024-12-07 11:44:59.396097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.872 [2024-12-07 11:44:59.396142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:00.872 [2024-12-07 11:44:59.396153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.872 [2024-12-07 11:44:59.396165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.872 [2024-12-07 11:44:59.396173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.872 [2024-12-07 11:44:59.398406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.872 [2024-12-07 11:44:59.398487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.872 [2024-12-07 11:44:59.398603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.872 [2024-12-07 11:44:59.398627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:00.872 11:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.872 [2024-12-07 11:45:00.073730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.872 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:00.872 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.872 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.872 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:01.142 Malloc1 00:33:01.142 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.403 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:01.663 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.663 [2024-12-07 11:45:00.930074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.663 11:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:01.923 11:45:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:01.923 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.924 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:01.924 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:01.924 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:01.924 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:01.924 11:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:02.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:02.491 fio-3.35 00:33:02.491 Starting 1 thread 00:33:05.033 00:33:05.033 test: (groupid=0, jobs=1): err= 0: pid=2699984: Sat Dec 7 11:45:03 2024 00:33:05.033 read: 
IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec) 00:33:05.033 slat (usec): min=2, max=319, avg= 2.34, stdev= 2.93 00:33:05.033 clat (usec): min=4348, max=11210, avg=5983.74, stdev=861.84 00:33:05.033 lat (usec): min=4350, max=11224, avg=5986.08, stdev=862.04 00:33:05.033 clat percentiles (usec): 00:33:05.033 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:33:05.033 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:33:05.033 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6980], 95.00th=[ 8225], 00:33:05.033 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10290], 99.95th=[10814], 00:33:05.033 | 99.99th=[11207] 00:33:05.033 bw ( KiB/s): min=39320, max=49968, per=100.00%, avg=47080.00, stdev=5183.82, samples=4 00:33:05.033 iops : min= 9830, max=12492, avg=11770.00, stdev=1295.96, samples=4 00:33:05.033 write: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(91.7MiB/2005msec); 0 zone resets 00:33:05.033 slat (usec): min=2, max=312, avg= 2.42, stdev= 2.26 00:33:05.033 clat (usec): min=3287, max=9693, avg=4858.87, stdev=743.73 00:33:05.033 lat (usec): min=3308, max=9701, avg=4861.30, stdev=743.97 00:33:05.033 clat percentiles (usec): 00:33:05.033 | 1.00th=[ 3884], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4424], 00:33:05.033 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:33:05.033 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 6063], 95.00th=[ 6783], 00:33:05.033 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 8717], 99.95th=[ 9110], 00:33:05.033 | 99.99th=[ 9372] 00:33:05.033 bw ( KiB/s): min=39824, max=49664, per=99.94%, avg=46800.00, stdev=4667.01, samples=4 00:33:05.033 iops : min= 9956, max=12416, avg=11700.00, stdev=1166.75, samples=4 00:33:05.033 lat (msec) : 4=0.98%, 10=98.94%, 20=0.08% 00:33:05.033 cpu : usr=79.79%, sys=19.41%, ctx=25, majf=0, minf=1544 00:33:05.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:05.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:05.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.034 issued rwts: total=23594,23473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.034 00:33:05.034 Run status group 0 (all jobs): 00:33:05.034 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.6MB), run=2005-2005msec 00:33:05.034 WRITE: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=91.7MiB (96.1MB), run=2005-2005msec 00:33:05.034 ----------------------------------------------------- 00:33:05.034 Suppressions used: 00:33:05.034 count bytes template 00:33:05.034 1 57 /usr/src/fio/parse.c 00:33:05.034 1 8 libtcmalloc_minimal.so 00:33:05.034 ----------------------------------------------------- 00:33:05.034 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.034 11:45:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:05.034 11:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.620 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:05.620 fio-3.35 00:33:05.620 Starting 1 thread 00:33:06.559 [2024-12-07 11:45:05.662988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:07.938 00:33:07.938 test: (groupid=0, jobs=1): err= 0: pid=2700635: Sat Dec 7 11:45:07 2024 00:33:07.938 read: 
IOPS=8879, BW=139MiB/s (145MB/s)(279MiB/2008msec) 00:33:07.938 slat (usec): min=3, max=118, avg= 3.87, stdev= 1.78 00:33:07.938 clat (usec): min=2287, max=19206, avg=8677.84, stdev=2070.17 00:33:07.938 lat (usec): min=2290, max=19210, avg=8681.71, stdev=2070.37 00:33:07.938 clat percentiles (usec): 00:33:07.939 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6915], 00:33:07.939 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:33:07.939 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11076], 95.00th=[11863], 00:33:07.939 | 99.00th=[14222], 99.50th=[15008], 99.90th=[16909], 99.95th=[18744], 00:33:07.939 | 99.99th=[19268] 00:33:07.939 bw ( KiB/s): min=62016, max=77408, per=50.02%, avg=71064.00, stdev=7323.80, samples=4 00:33:07.939 iops : min= 3876, max= 4838, avg=4441.50, stdev=457.74, samples=4 00:33:07.939 write: IOPS=5199, BW=81.2MiB/s (85.2MB/s)(144MiB/1775msec); 0 zone resets 00:33:07.939 slat (usec): min=40, max=364, avg=41.88, stdev= 8.36 00:33:07.939 clat (usec): min=2456, max=17285, avg=10157.34, stdev=1752.42 00:33:07.939 lat (usec): min=2497, max=17326, avg=10199.22, stdev=1754.69 00:33:07.939 clat percentiles (usec): 00:33:07.939 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 8848], 00:33:07.939 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:33:07.939 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12256], 95.00th=[13304], 00:33:07.939 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16909], 99.95th=[17171], 00:33:07.939 | 99.99th=[17171] 00:33:07.939 bw ( KiB/s): min=65824, max=80448, per=88.75%, avg=73832.00, stdev=7438.30, samples=4 00:33:07.939 iops : min= 4114, max= 5028, avg=4614.50, stdev=464.89, samples=4 00:33:07.939 lat (msec) : 4=0.40%, 10=64.71%, 20=34.89% 00:33:07.939 cpu : usr=89.04%, sys=10.01%, ctx=14, majf=0, minf=2328 00:33:07.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:07.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:07.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:07.939 issued rwts: total=17830,9229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:07.939 00:33:07.939 Run status group 0 (all jobs): 00:33:07.939 READ: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=279MiB (292MB), run=2008-2008msec 00:33:07.939 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=144MiB (151MB), run=1775-1775msec 00:33:08.198 ----------------------------------------------------- 00:33:08.198 Suppressions used: 00:33:08.198 count bytes template 00:33:08.198 1 57 /usr/src/fio/parse.c 00:33:08.198 282 27072 /usr/src/fio/iolog.c 00:33:08.198 1 8 libtcmalloc_minimal.so 00:33:08.198 ----------------------------------------------------- 00:33:08.198 00:33:08.198 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- 
# jq -r '.config[].params.traddr' 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:08.457 11:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:09.025 Nvme0n1 00:33:09.026 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=34903b93-0bc4-4db5-83ba-e3303c678ed1 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 34903b93-0bc4-4db5-83ba-e3303c678ed1 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=34903b93-0bc4-4db5-83ba-e3303c678ed1 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:09.597 { 00:33:09.597 "uuid": "34903b93-0bc4-4db5-83ba-e3303c678ed1", 00:33:09.597 "name": "lvs_0", 00:33:09.597 "base_bdev": "Nvme0n1", 00:33:09.597 "total_data_clusters": 1787, 00:33:09.597 "free_clusters": 1787, 00:33:09.597 "block_size": 512, 00:33:09.597 "cluster_size": 1073741824 
00:33:09.597 } 00:33:09.597 ]' 00:33:09.597 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="34903b93-0bc4-4db5-83ba-e3303c678ed1") .free_clusters' 00:33:09.856 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1787 00:33:09.856 11:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="34903b93-0bc4-4db5-83ba-e3303c678ed1") .cluster_size' 00:33:09.856 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:09.856 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1829888 00:33:09.856 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1829888 00:33:09.856 1829888 00:33:09.856 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:09.856 5b041c15-a733-4d0e-85d9-953d4ce4e64b 00:33:10.117 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:10.117 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:10.378 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.639 11:45:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:10.639 11:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.900 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:10.900 fio-3.35 00:33:10.900 Starting 1 thread 00:33:13.444 00:33:13.444 test: (groupid=0, jobs=1): err= 0: pid=2702313: Sat Dec 7 11:45:12 2024 00:33:13.444 read: IOPS=9205, BW=36.0MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:33:13.444 slat (usec): min=2, max=118, avg= 2.36, stdev= 1.19 00:33:13.444 clat (usec): min=2734, max=12614, avg=7665.96, stdev=584.96 00:33:13.444 lat (usec): min=2753, max=12617, avg=7668.32, stdev=584.88 00:33:13.444 clat percentiles (usec): 00:33:13.444 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7242], 00:33:13.444 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:33:13.444 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:33:13.444 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11731], 00:33:13.444 | 99.99th=[12649] 00:33:13.444 bw ( KiB/s): min=35624, max=37392, per=99.89%, avg=36784.00, stdev=791.45, samples=4 00:33:13.444 iops : min= 8906, max= 9348, avg=9196.00, stdev=197.86, samples=4 00:33:13.444 write: IOPS=9210, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2006msec); 0 zone resets 00:33:13.444 slat (nsec): min=2242, max=102845, avg=2437.80, stdev=833.12 00:33:13.444 clat (usec): min=1438, max=11578, avg=6137.64, stdev=500.02 00:33:13.444 lat (usec): min=1447, max=11581, avg=6140.08, stdev=499.99 00:33:13.444 clat percentiles (usec): 00:33:13.444 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:33:13.444 | 30.00th=[ 
5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:33:13.444 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:33:13.444 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9372], 99.95th=[10421], 00:33:13.444 | 99.99th=[11076] 00:33:13.444 bw ( KiB/s): min=36416, max=37120, per=100.00%, avg=36848.00, stdev=319.47, samples=4 00:33:13.444 iops : min= 9104, max= 9280, avg=9212.00, stdev=79.87, samples=4 00:33:13.444 lat (msec) : 2=0.01%, 4=0.10%, 10=99.75%, 20=0.14% 00:33:13.444 cpu : usr=78.95%, sys=20.30%, ctx=46, majf=0, minf=1540 00:33:13.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:13.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:13.444 issued rwts: total=18467,18476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:13.444 00:33:13.444 Run status group 0 (all jobs): 00:33:13.444 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:33:13.444 WRITE: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2006-2006msec 00:33:13.706 ----------------------------------------------------- 00:33:13.706 Suppressions used: 00:33:13.706 count bytes template 00:33:13.706 1 58 /usr/src/fio/parse.c 00:33:13.706 1 8 libtcmalloc_minimal.so 00:33:13.706 ----------------------------------------------------- 00:33:13.706 00:33:13.706 11:45:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:13.967 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:14.540 11:45:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fc6dcefa-d42d-4406-ace0-42044a96ecd9 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fc6dcefa-d42d-4406-ace0-42044a96ecd9 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=fc6dcefa-d42d-4406-ace0-42044a96ecd9 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:14.540 11:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:14.801 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:14.801 { 00:33:14.801 "uuid": "34903b93-0bc4-4db5-83ba-e3303c678ed1", 00:33:14.801 "name": "lvs_0", 00:33:14.801 "base_bdev": "Nvme0n1", 00:33:14.801 "total_data_clusters": 1787, 00:33:14.801 "free_clusters": 0, 00:33:14.801 "block_size": 512, 00:33:14.801 "cluster_size": 1073741824 00:33:14.801 }, 00:33:14.801 { 00:33:14.801 "uuid": "fc6dcefa-d42d-4406-ace0-42044a96ecd9", 00:33:14.801 "name": "lvs_n_0", 00:33:14.801 "base_bdev": "5b041c15-a733-4d0e-85d9-953d4ce4e64b", 00:33:14.801 "total_data_clusters": 457025, 00:33:14.802 "free_clusters": 457025, 00:33:14.802 "block_size": 512, 00:33:14.802 "cluster_size": 4194304 00:33:14.802 } 00:33:14.802 ]' 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fc6dcefa-d42d-4406-ace0-42044a96ecd9") .free_clusters' 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=457025 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fc6dcefa-d42d-4406-ace0-42044a96ecd9") .cluster_size' 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1828100 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1828100 00:33:14.802 1828100 00:33:14.802 11:45:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:17.346 5643b468-cf83-42cd-aa88-e740ad00f820 00:33:17.346 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:17.346 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:17.606 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:17.866 11:45:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.866 11:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:17.866 11:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:17.866 11:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:17.866 11:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:17.866 11:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.126 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:18.126 fio-3.35 00:33:18.126 Starting 1 thread 00:33:20.671 00:33:20.671 test: (groupid=0, jobs=1): err= 0: pid=2703900: Sat Dec 7 11:45:19 2024 00:33:20.671 read: IOPS=8206, BW=32.1MiB/s (33.6MB/s)(64.3MiB/2007msec) 00:33:20.671 slat (usec): min=2, max=121, avg= 2.37, stdev= 1.28 00:33:20.671 clat (usec): min=3177, max=14152, avg=8596.40, stdev=661.94 00:33:20.671 lat (usec): min=3199, max=14154, avg=8598.76, stdev=661.86 00:33:20.671 clat percentiles (usec): 00:33:20.671 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 7767], 20.00th=[ 8094], 00:33:20.671 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:33:20.671 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:33:20.671 | 99.00th=[10028], 99.50th=[10290], 99.90th=[11994], 99.95th=[13042], 00:33:20.671 | 99.99th=[14091] 00:33:20.671 bw ( KiB/s): min=31584, max=33552, per=99.97%, avg=32816.00, stdev=863.90, samples=4 00:33:20.671 iops : min= 7896, max= 8388, avg=8204.00, stdev=215.98, samples=4 00:33:20.671 write: IOPS=8216, BW=32.1MiB/s (33.7MB/s)(64.4MiB/2007msec); 0 zone resets 00:33:20.671 slat (nsec): min=2231, max=107031, avg=2446.69, stdev=917.06 00:33:20.671 clat (usec): min=1582, max=13160, avg=6859.62, stdev=585.18 00:33:20.671 lat (usec): min=1593, max=13162, avg=6862.07, stdev=585.12 00:33:20.671 clat percentiles (usec): 00:33:20.671 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:33:20.671 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:33:20.671 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:33:20.671 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[11469], 99.95th=[12125], 00:33:20.671 | 99.99th=[13042] 00:33:20.671 bw ( KiB/s): min=32400, max=33216, per=99.96%, avg=32852.00, stdev=341.01, samples=4 00:33:20.671 iops : 
min= 8100, max= 8304, avg=8213.00, stdev=85.25, samples=4 00:33:20.671 lat (msec) : 2=0.01%, 4=0.10%, 10=99.17%, 20=0.73% 00:33:20.671 cpu : usr=74.63%, sys=24.38%, ctx=69, majf=0, minf=1539 00:33:20.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:20.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.671 issued rwts: total=16471,16490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.671 00:33:20.671 Run status group 0 (all jobs): 00:33:20.671 READ: bw=32.1MiB/s (33.6MB/s), 32.1MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=64.3MiB (67.5MB), run=2007-2007msec 00:33:20.671 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.4MiB (67.5MB), run=2007-2007msec 00:33:20.932 ----------------------------------------------------- 00:33:20.932 Suppressions used: 00:33:20.932 count bytes template 00:33:20.932 1 58 /usr/src/fio/parse.c 00:33:20.932 1 8 libtcmalloc_minimal.so 00:33:20.932 ----------------------------------------------------- 00:33:20.932 00:33:20.932 11:45:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:21.192 11:45:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:21.192 11:45:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:24.488 11:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:24.747 11:45:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 
lvs_0/lbd_0 00:33:25.318 11:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:25.318 11:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.860 rmmod nvme_tcp 00:33:27.860 rmmod nvme_fabrics 00:33:27.860 rmmod nvme_keyring 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2699122 ']' 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2699122 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2699122 ']' 00:33:27.860 11:45:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2699122 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699122 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699122' 00:33:27.860 killing process with pid 2699122 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2699122 00:33:27.860 11:45:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2699122 00:33:28.432 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:28.432 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:28.433 11:45:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.433 11:45:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:30.982 00:33:30.982 real 0m38.250s 00:33:30.982 user 3m3.485s 00:33:30.982 sys 0m12.667s 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.982 ************************************ 00:33:30.982 END TEST nvmf_fio_host 00:33:30.982 ************************************ 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.982 ************************************ 00:33:30.982 START TEST nvmf_failover 00:33:30.982 ************************************ 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:30.982 * Looking for test storage... 
00:33:30.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.982 11:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.982 --rc genhtml_branch_coverage=1 00:33:30.982 --rc genhtml_function_coverage=1 00:33:30.982 --rc genhtml_legend=1 00:33:30.982 --rc geninfo_all_blocks=1 00:33:30.982 --rc geninfo_unexecuted_blocks=1 00:33:30.982 00:33:30.982 ' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:33:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.982 --rc genhtml_branch_coverage=1 00:33:30.982 --rc genhtml_function_coverage=1 00:33:30.982 --rc genhtml_legend=1 00:33:30.982 --rc geninfo_all_blocks=1 00:33:30.982 --rc geninfo_unexecuted_blocks=1 00:33:30.982 00:33:30.982 ' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.982 --rc genhtml_branch_coverage=1 00:33:30.982 --rc genhtml_function_coverage=1 00:33:30.982 --rc genhtml_legend=1 00:33:30.982 --rc geninfo_all_blocks=1 00:33:30.982 --rc geninfo_unexecuted_blocks=1 00:33:30.982 00:33:30.982 ' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.982 --rc genhtml_branch_coverage=1 00:33:30.982 --rc genhtml_function_coverage=1 00:33:30.982 --rc genhtml_legend=1 00:33:30.982 --rc geninfo_all_blocks=1 00:33:30.982 --rc geninfo_unexecuted_blocks=1 00:33:30.982 00:33:30.982 ' 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:30.982 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:30.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.983 11:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.142 11:45:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:39.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.142 11:45:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:39.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.142 11:45:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:39.142 Found net devices under 0000:31:00.0: cvl_0_0 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:39.142 Found net devices under 0000:31:00.1: cvl_0_1 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:39.142 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:39.142 11:45:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:33:39.143 00:33:39.143 --- 10.0.0.2 ping statistics --- 00:33:39.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.143 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:33:39.143 00:33:39.143 --- 10.0.0.1 ping statistics --- 00:33:39.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.143 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2710063 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2710063 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2710063 ']' 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.143 11:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.143 [2024-12-07 11:45:37.478959] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:39.143 [2024-12-07 11:45:37.479073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.143 [2024-12-07 11:45:37.630047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:39.143 [2024-12-07 11:45:37.729605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.143 [2024-12-07 11:45:37.729651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.143 [2024-12-07 11:45:37.729663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.143 [2024-12-07 11:45:37.729674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:39.143 [2024-12-07 11:45:37.729684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.143 [2024-12-07 11:45:37.731734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.143 [2024-12-07 11:45:37.731854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.143 [2024-12-07 11:45:37.731878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:39.143 [2024-12-07 11:45:38.433566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.143 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:39.403 Malloc0 00:33:39.403 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.663 11:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:39.923 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.923 [2024-12-07 11:45:39.237378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.923 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:40.183 [2024-12-07 11:45:39.421900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:40.183 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:40.481 [2024-12-07 11:45:39.594439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2710465 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2710465 /var/tmp/bdevperf.sock 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2710465 ']' 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:40.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.481 11:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:41.422 11:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:41.422 11:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:41.422 11:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:41.422 NVMe0n1 00:33:41.422 11:45:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:41.992 00:33:41.992 11:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2710808 00:33:41.992 11:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:41.992 11:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:33:42.934 11:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.196 [2024-12-07 11:45:42.294792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:43.196 [2024-12-07 11:45:42.294895]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:43.197 11:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:46.496 11:45:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:46.496 00:33:46.496 11:45:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:46.755 [2024-12-07 11:45:45.886500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:46.755 [2024-12-07 11:45:45.886586]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:46.755 11:45:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:50.055 11:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.055 [2024-12-07
11:45:49.078114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.055 11:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:50.995 11:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:50.995 [2024-12-07 11:45:50.266124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:50.995 [2024-12-07 11:45:50.266230]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:50.996 11:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2710808 00:33:57.575 { 00:33:57.575 "results": [ 00:33:57.576 { 00:33:57.576 "job": "NVMe0n1", 00:33:57.576 "core_mask": "0x1", 00:33:57.576 "workload": "verify", 00:33:57.576 "status": "finished", 00:33:57.576 "verify_range": { 00:33:57.576 "start": 0, 00:33:57.576 "length": 16384 00:33:57.576 }, 00:33:57.576 "queue_depth": 128, 00:33:57.576 "io_size": 4096, 00:33:57.576 "runtime": 15.01159, 00:33:57.576 "iops": 10005.935413903524, 00:33:57.576 "mibps": 39.08568521056064, 00:33:57.576 "io_failed": 10189, 00:33:57.576 "io_timeout": 0, 00:33:57.576 "avg_latency_us": 11950.054856582332, 00:33:57.576 "min_latency_us": 610.9866666666667, 00:33:57.576 "max_latency_us": 21080.746666666666 00:33:57.576 } 00:33:57.576 ], 00:33:57.576 "core_count": 1 00:33:57.576 } 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2710465 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2710465 ']' 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2710465 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959
-- # '[' Linux = Linux ']' 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710465 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710465' 00:33:57.576 killing process with pid 2710465 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2710465 00:33:57.576 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2710465 00:33:57.845 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:57.845 [2024-12-07 11:45:39.701547] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:33:57.845 [2024-12-07 11:45:39.701657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710465 ] 00:33:57.845 [2024-12-07 11:45:39.827475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.845 [2024-12-07 11:45:39.925347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.845 Running I/O for 15 seconds... 
00:33:57.845 9901.00 IOPS, 38.68 MiB/s [2024-12-07T10:45:57.199Z] [2024-12-07 11:45:42.296661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-12-07 11:45:42.296705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.845 [2024-12-07 11:45:42.296730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-12-07 11:45:42.296744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.846 [2024-12-07 11:45:42.296769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.846 [2024-12-07 11:45:42.296794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.846 [2024-12-07 11:45:42.296818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:57.846 [2024-12-07 11:45:42.296842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.846 [2024-12-07 11:45:42.296865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.846 [2024-12-07 11:45:42.296889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.296913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.296938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.296961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.296980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.296991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 
[2024-12-07 11:45:42.297257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.846 [2024-12-07 11:45:42.297520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:57.846 [2024-12-07 11:45:42.297532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.847 [2024-12-07 11:45:42.297903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.297926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 
[2024-12-07 11:45:42.297940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.297950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.297974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.297987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.847 [2024-12-07 11:45:42.298257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.847 [2024-12-07 11:45:42.298270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.848 [2024-12-07 11:45:42.298281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 
[2024-12-07 11:45:42.298340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 
[2024-12-07 11:45:42.298751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.298985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.848 [2024-12-07 11:45:42.298995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.848 [2024-12-07 11:45:42.299008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 
[2024-12-07 11:45:42.299154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.849 [2024-12-07 11:45:42.299410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.849 [2024-12-07 11:45:42.299422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:57.849 [2024-12-07 11:45:42.299432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: queued READ commands sqid:1, lba 85560 through 85656 in steps of 8 (various cids), each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) ...]
00:33:57.850 [2024-12-07 11:45:42.299764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:57.850 [2024-12-07 11:45:42.299776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:57.850 [2024-12-07 11:45:42.299788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85664 len:8 PRP1 0x0 PRP2 0x0
00:33:57.850 [2024-12-07 11:45:42.299802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:57.850 [2024-12-07 11:45:42.300021] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four NOTICE pairs elided: admin ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3, each printed and completed ABORTED - SQ DELETION (00/08) ...]
00:33:57.850 [2024-12-07 11:45:42.300152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:33:57.850 [2024-12-07 11:45:42.303920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:33:57.850 [2024-12-07 11:45:42.303965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor
00:33:57.850 [2024-12-07 11:45:42.382934] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:33:57.850 9555.50 IOPS, 37.33 MiB/s [2024-12-07T10:45:57.204Z]
00:33:57.850 9722.33 IOPS, 37.98 MiB/s [2024-12-07T10:45:57.204Z]
00:33:57.850 9794.25 IOPS, 38.26 MiB/s [2024-12-07T10:45:57.204Z]
00:33:57.850 [2024-12-07 11:45:45.886808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:57.850 [2024-12-07 11:45:45.886860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: queued WRITE commands sqid:1, lba 127104 through 127776 in steps of 8, and queued READ commands lba 126984 through 127032 (various cids), each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) ...]
00:33:57.853 [2024-12-07 11:45:45.889109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:57.853 [2024-12-07 11:45:45.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127784 len:8 PRP1 0x0 PRP2 0x0
00:33:57.853 [2024-12-07 11:45:45.889140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:57.853 [2024-12-07 11:45:45.889154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:57.853 [2024-12-07 11:45:45.889164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:57.853 [2024-12-07 11:45:45.889174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127792 len:8 PRP1 0x0 PRP2 0x0
00:33:57.853 [2024-12-07 11:45:45.889186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127800 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127808 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127816 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889319] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127824 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127832 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127840 len:8 PRP1 0x0 PRP2 0x0 00:33:57.853 [2024-12-07 11:45:45.889420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.853 [2024-12-07 11:45:45.889447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127848 len:8 PRP1 0x0 PRP2 0x0 
00:33:57.853 [2024-12-07 11:45:45.889458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.853 [2024-12-07 11:45:45.889468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.853 [2024-12-07 11:45:45.889476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127856 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127864 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127872 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889580] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127880 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127888 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127896 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127904 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127912 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127920 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127928 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127936 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127944 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127040 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.889962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.889971] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.889980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127048 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.889990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.890008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.890023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127056 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.890033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.890051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.890061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127064 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.890071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.890089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.890098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127072 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 
[2024-12-07 11:45:45.890108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.890126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.890136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127080 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.890146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.854 [2024-12-07 11:45:45.890164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.854 [2024-12-07 11:45:45.890173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127088 len:8 PRP1 0x0 PRP2 0x0 00:33:57.854 [2024-12-07 11:45:45.890183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.854 [2024-12-07 11:45:45.890193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.890201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.890210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127096 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.890221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.890232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.890241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.890250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127952 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.890260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.890270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.890278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.890288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127960 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.890298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.890308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.890315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.890324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127968 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.890335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.890345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.890353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.890362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127976 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.890397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.900588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.900626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.900642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127984 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.900656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.900668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.855 [2024-12-07 11:45:45.900677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.855 [2024-12-07 11:45:45.900687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127992 len:8 PRP1 0x0 PRP2 0x0 00:33:57.855 [2024-12-07 11:45:45.900698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.900913] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:57.855 [2024-12-07 11:45:45.900955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.855 [2024-12-07 11:45:45.900969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 
11:45:45.900984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.855 [2024-12-07 11:45:45.900995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.901007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.855 [2024-12-07 11:45:45.901031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.901043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.855 [2024-12-07 11:45:45.901055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:45.901066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:57.855 [2024-12-07 11:45:45.901127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:33:57.855 [2024-12-07 11:45:45.904842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:57.855 [2024-12-07 11:45:45.939426] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:33:57.855 9778.80 IOPS, 38.20 MiB/s [2024-12-07T10:45:57.209Z] 9870.67 IOPS, 38.56 MiB/s [2024-12-07T10:45:57.209Z] 9927.29 IOPS, 38.78 MiB/s [2024-12-07T10:45:57.209Z] 9959.12 IOPS, 38.90 MiB/s [2024-12-07T10:45:57.209Z] 9960.33 IOPS, 38.91 MiB/s [2024-12-07T10:45:57.209Z] [2024-12-07 11:45:50.266636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 11:45:50.266910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.855 [2024-12-07 11:45:50.266923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.855 [2024-12-07 
11:45:50.266933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:57.855 [2024-12-07 11:45:50.266947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:57.855 [2024-12-07 11:45:50.266957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ command/completion pairs, all ABORTED - SQ DELETION (00/08), for lba 98272 through 98624 elided ...]
00:33:57.857 [2024-12-07 11:45:50.268076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:57.857 [2024-12-07 11:45:50.268088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE command/completion pairs, all ABORTED - SQ DELETION (00/08), for lba 98696 through 99168 elided ...]
00:33:57.859 [2024-12-07 11:45:50.269543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:57.859 [2024-12-07 11:45:50.269557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0
00:33:57.859 [2024-12-07 11:45:50.269568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:57.859 [2024-12-07 11:45:50.269582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... same manual-complete / abort-queued-i/o sequence repeated for WRITE lba 99184 and 99192 and READ lba 98632, 98640, 98648, and 98656 elided ...]
00:33:57.859 [2024-12-07 11:45:50.269820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:57.859 [2024-12-07 11:45:50.269827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.859 
[2024-12-07 11:45:50.269837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98664 len:8 PRP1 0x0 PRP2 0x0 00:33:57.859 [2024-12-07 11:45:50.269848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.859 [2024-12-07 11:45:50.269859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.859 [2024-12-07 11:45:50.269866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.859 [2024-12-07 11:45:50.269876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98672 len:8 PRP1 0x0 PRP2 0x0 00:33:57.860 [2024-12-07 11:45:50.269886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.860 [2024-12-07 11:45:50.269896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:57.860 [2024-12-07 11:45:50.269904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:57.860 [2024-12-07 11:45:50.269914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98680 len:8 PRP1 0x0 PRP2 0x0 00:33:57.860 [2024-12-07 11:45:50.269925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.860 [2024-12-07 11:45:50.270136] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:57.860 [2024-12-07 11:45:50.270172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.860 [2024-12-07 11:45:50.270189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:57.860 [2024-12-07 11:45:50.270201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.860 [2024-12-07 11:45:50.270212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.860 [2024-12-07 11:45:50.270224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.860 [2024-12-07 11:45:50.270234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.860 [2024-12-07 11:45:50.270245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.860 [2024-12-07 11:45:50.270255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.860 [2024-12-07 11:45:50.270266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:57.860 [2024-12-07 11:45:50.270316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:33:57.860 [2024-12-07 11:45:50.274044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:57.860 [2024-12-07 11:45:50.434020] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:33:57.860 9856.60 IOPS, 38.50 MiB/s [2024-12-07T10:45:57.214Z] 9879.09 IOPS, 38.59 MiB/s [2024-12-07T10:45:57.214Z] 9928.83 IOPS, 38.78 MiB/s [2024-12-07T10:45:57.214Z] 9949.62 IOPS, 38.87 MiB/s [2024-12-07T10:45:57.214Z] 9995.07 IOPS, 39.04 MiB/s 00:33:57.860 Latency(us) 00:33:57.860 [2024-12-07T10:45:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:57.860 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:57.860 Verification LBA range: start 0x0 length 0x4000 00:33:57.860 NVMe0n1 : 15.01 10005.94 39.09 678.74 0.00 11950.05 610.99 21080.75 00:33:57.860 [2024-12-07T10:45:57.214Z] =================================================================================================================== 00:33:57.860 [2024-12-07T10:45:57.214Z] Total : 10005.94 39.09 678.74 0.00 11950.05 610.99 21080.75 00:33:57.860 Received shutdown signal, test time was about 15.000000 seconds 00:33:57.860 00:33:57.860 Latency(us) 00:33:57.860 [2024-12-07T10:45:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:57.860 [2024-12-07T10:45:57.214Z] =================================================================================================================== 00:33:57.860 [2024-12-07T10:45:57.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2713797 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2713797 /var/tmp/bdevperf.sock 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2713797 ']' 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:57.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.860 11:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:58.837 11:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.837 11:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:58.837 11:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:58.837 [2024-12-07 11:45:57.986549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:58.837 11:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:58.837 [2024-12-07 11:45:58.171033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:59.132 11:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:59.132 NVMe0n1 00:33:59.132 11:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:59.431 00:33:59.431 11:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:00.001 00:34:00.001 11:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:00.001 11:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:00.001 11:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:00.260 11:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:03.557 11:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:03.557 11:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:03.557 11:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2714846 00:34:03.557 11:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:03.557 11:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2714846 00:34:04.497 { 00:34:04.497 "results": [ 00:34:04.497 { 00:34:04.497 "job": "NVMe0n1", 00:34:04.497 "core_mask": "0x1", 00:34:04.497 "workload": "verify", 00:34:04.497 "status": "finished", 00:34:04.497 "verify_range": { 00:34:04.497 "start": 0, 00:34:04.497 "length": 16384 00:34:04.497 }, 00:34:04.497 "queue_depth": 128, 00:34:04.497 "io_size": 4096, 00:34:04.497 "runtime": 1.00741, 00:34:04.497 "iops": 10150.782700191581, 00:34:04.497 "mibps": 39.651494922623364, 00:34:04.497 "io_failed": 0, 00:34:04.497 "io_timeout": 0, 00:34:04.497 "avg_latency_us": 12547.028807614577, 00:34:04.497 "min_latency_us": 1454.08, 00:34:04.497 "max_latency_us": 11468.8 00:34:04.497 } 00:34:04.497 ], 00:34:04.497 "core_count": 1 00:34:04.497 } 00:34:04.497 11:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:04.497 [2024-12-07 11:45:57.078006] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:34:04.497 [2024-12-07 11:45:57.078124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713797 ] 00:34:04.497 [2024-12-07 11:45:57.202908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.497 [2024-12-07 11:45:57.300432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.497 [2024-12-07 11:45:59.441524] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:04.497 [2024-12-07 11:45:59.441594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.497 [2024-12-07 11:45:59.441612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.497 [2024-12-07 11:45:59.441628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.497 [2024-12-07 11:45:59.441640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.497 [2024-12-07 11:45:59.441651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.498 [2024-12-07 11:45:59.441661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.498 [2024-12-07 11:45:59.441673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.498 [2024-12-07 11:45:59.441683] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.498 [2024-12-07 11:45:59.441694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:04.498 [2024-12-07 11:45:59.441753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:04.498 [2024-12-07 11:45:59.441782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:34:04.498 [2024-12-07 11:45:59.492188] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:04.498 Running I/O for 1 seconds... 00:34:04.498 10096.00 IOPS, 39.44 MiB/s 00:34:04.498 Latency(us) 00:34:04.498 [2024-12-07T10:46:03.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:04.498 Verification LBA range: start 0x0 length 0x4000 00:34:04.498 NVMe0n1 : 1.01 10150.78 39.65 0.00 0.00 12547.03 1454.08 11468.80 00:34:04.498 [2024-12-07T10:46:03.852Z] =================================================================================================================== 00:34:04.498 [2024-12-07T10:46:03.852Z] Total : 10150.78 39.65 0.00 0.00 12547.03 1454.08 11468.80 00:34:04.498 11:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:04.498 11:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:04.758 11:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.018 11:46:04 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:05.018 11:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:05.018 11:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.279 11:46:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2713797 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2713797 ']' 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2713797 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713797 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713797' 00:34:08.576 killing 
process with pid 2713797 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2713797 00:34:08.576 11:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2713797 00:34:09.145 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:09.145 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.407 rmmod nvme_tcp 00:34:09.407 rmmod nvme_fabrics 00:34:09.407 rmmod nvme_keyring 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2710063 ']' 00:34:09.407 11:46:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2710063 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2710063 ']' 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2710063 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710063 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710063' 00:34:09.407 killing process with pid 2710063 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2710063 00:34:09.407 11:46:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2710063 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.350 11:46:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.350 11:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.269 00:34:12.269 real 0m41.632s 00:34:12.269 user 2m8.406s 00:34:12.269 sys 0m8.730s 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:12.269 ************************************ 00:34:12.269 END TEST nvmf_failover 00:34:12.269 ************************************ 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.269 ************************************ 00:34:12.269 START TEST nvmf_host_discovery 00:34:12.269 ************************************ 00:34:12.269 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:12.530 * Looking for test storage... 
00:34:12.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:12.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.531 --rc genhtml_branch_coverage=1 00:34:12.531 --rc genhtml_function_coverage=1 00:34:12.531 --rc 
genhtml_legend=1 00:34:12.531 --rc geninfo_all_blocks=1 00:34:12.531 --rc geninfo_unexecuted_blocks=1 00:34:12.531 00:34:12.531 ' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:12.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.531 --rc genhtml_branch_coverage=1 00:34:12.531 --rc genhtml_function_coverage=1 00:34:12.531 --rc genhtml_legend=1 00:34:12.531 --rc geninfo_all_blocks=1 00:34:12.531 --rc geninfo_unexecuted_blocks=1 00:34:12.531 00:34:12.531 ' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:12.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.531 --rc genhtml_branch_coverage=1 00:34:12.531 --rc genhtml_function_coverage=1 00:34:12.531 --rc genhtml_legend=1 00:34:12.531 --rc geninfo_all_blocks=1 00:34:12.531 --rc geninfo_unexecuted_blocks=1 00:34:12.531 00:34:12.531 ' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:12.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.531 --rc genhtml_branch_coverage=1 00:34:12.531 --rc genhtml_function_coverage=1 00:34:12.531 --rc genhtml_legend=1 00:34:12.531 --rc geninfo_all_blocks=1 00:34:12.531 --rc geninfo_unexecuted_blocks=1 00:34:12.531 00:34:12.531 ' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.531 11:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.531 11:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.531 11:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.531 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:12.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.532 11:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.673 
11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.673 11:46:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.673 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.673 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.673 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.674 Found net devices under 0000:31:00.0: cvl_0_0 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.674 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:34:20.674 00:34:20.674 --- 10.0.0.2 ping statistics --- 00:34:20.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.674 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:20.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:34:20.674 00:34:20.674 --- 10.0.0.1 ping statistics --- 00:34:20.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.674 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.674 
11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2720263 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2720263 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2720263 ']' 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.674 11:46:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.674 [2024-12-07 11:46:19.100328] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:20.674 [2024-12-07 11:46:19.100456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.674 [2024-12-07 11:46:19.255411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.674 [2024-12-07 11:46:19.353751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.674 [2024-12-07 11:46:19.353793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.674 [2024-12-07 11:46:19.353804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.674 [2024-12-07 11:46:19.353816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.674 [2024-12-07 11:46:19.353826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:20.674 [2024-12-07 11:46:19.355017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.674 [2024-12-07 11:46:19.894355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.674 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.674 [2024-12-07 11:46:19.902568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:20.674 11:46:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.675 null0 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.675 null1 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2720431 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2720431 /tmp/host.sock 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2720431 ']' 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:20.675 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.675 11:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.936 [2024-12-07 11:46:20.025146] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:20.937 [2024-12-07 11:46:20.025285] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720431 ] 00:34:20.937 [2024-12-07 11:46:20.166893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.937 [2024-12-07 11:46:20.269893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:21.508 
11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:21.508 11:46:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.508 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.767 11:46:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:21.767 
11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:21.767 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.768 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 [2024-12-07 11:46:21.129784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 
-- # jq '. | length' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:22.027 11:46:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:22.597 [2024-12-07 11:46:21.854242] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:22.597 [2024-12-07 11:46:21.854277] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:22.597 [2024-12-07 11:46:21.854304] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:22.597 [2024-12-07 11:46:21.941575] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:22.857 [2024-12-07 11:46:22.001511] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:22.857 [2024-12-07 11:46:22.003116] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500039e700:1 started. 
00:34:22.857 [2024-12-07 11:46:22.005117] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:22.857 [2024-12-07 11:46:22.005142] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:22.857 [2024-12-07 11:46:22.012755] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500039e700 was disconnected and freed. delete nvme_qpair. 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- 
# waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths 
nvme0)" == "$NVMF_PORT" ]]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:23.117 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:23.377 11:46:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.377 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.378 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.378 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.378 [2024-12-07 11:46:22.699528] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500039e980:1 started. 
00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 [2024-12-07 11:46:22.745196] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500039e980 was disconnected and freed. delete nvme_qpair. 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 [2024-12-07 11:46:22.790543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:23.637 [2024-12-07 11:46:22.791102] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:23.637 [2024-12-07 11:46:22.791136] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # 
local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 [2024-12-07 11:46:22.878824] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:23.637 11:46:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:23.637 [2024-12-07 11:46:22.979018] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:23.638 [2024-12-07 11:46:22.979084] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:34:23.638 [2024-12-07 11:46:22.979101] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:23.638 [2024-12-07 11:46:22.979120] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:25.020 11:46:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:25.020 11:46:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 [2024-12-07 11:46:24.050181] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:25.020 [2024-12-07 11:46:24.050214] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:25.020 [2024-12-07 11:46:24.054836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.020 [2024-12-07 11:46:24.054868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.020 [2024-12-07 11:46:24.054884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:25.020 [2024-12-07 11:46:24.054896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.020 [2024-12-07 11:46:24.054908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.020 [2024-12-07 11:46:24.054919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.020 [2024-12-07 11:46:24.054930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.020 [2024-12-07 11:46:24.054941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.020 [2024-12-07 11:46:24.054952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:25.020 11:46:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:25.020 [2024-12-07 11:46:24.064844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.020 [2024-12-07 11:46:24.074877] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.020 [2024-12-07 11:46:24.074906] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.020 [2024-12-07 11:46:24.074915] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.020 [2024-12-07 11:46:24.074924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.020 [2024-12-07 11:46:24.074957] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.020 [2024-12-07 11:46:24.075298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.020 [2024-12-07 11:46:24.075346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.020 [2024-12-07 11:46:24.075364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.020 [2024-12-07 11:46:24.075407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.020 [2024-12-07 11:46:24.075446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.020 [2024-12-07 11:46:24.075459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.020 [2024-12-07 11:46:24.075472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.020 [2024-12-07 11:46:24.075484] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.020 [2024-12-07 11:46:24.075494] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.020 [2024-12-07 11:46:24.075502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.020 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.020 [2024-12-07 11:46:24.084995] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.020 [2024-12-07 11:46:24.085026] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:25.020 [2024-12-07 11:46:24.085034] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.020 [2024-12-07 11:46:24.085042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.020 [2024-12-07 11:46:24.085069] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:25.020 [2024-12-07 11:46:24.085413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.020 [2024-12-07 11:46:24.085435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.020 [2024-12-07 11:46:24.085447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.020 [2024-12-07 11:46:24.085465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.020 [2024-12-07 11:46:24.085496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.020 [2024-12-07 11:46:24.085508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.085519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.021 [2024-12-07 11:46:24.085528] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.021 [2024-12-07 11:46:24.085536] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:25.021 [2024-12-07 11:46:24.085543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.021 [2024-12-07 11:46:24.095103] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.021 [2024-12-07 11:46:24.095128] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.021 [2024-12-07 11:46:24.095136] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.095143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.021 [2024-12-07 11:46:24.095166] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.095520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.021 [2024-12-07 11:46:24.095541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.021 [2024-12-07 11:46:24.095557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.021 [2024-12-07 11:46:24.095574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.021 [2024-12-07 11:46:24.095616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.021 [2024-12-07 11:46:24.095630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.095641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:34:25.021 [2024-12-07 11:46:24.095650] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.021 [2024-12-07 11:46:24.095658] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.021 [2024-12-07 11:46:24.095665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.021 [2024-12-07 11:46:24.105200] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.021 [2024-12-07 11:46:24.105222] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.021 [2024-12-07 11:46:24.105231] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.105240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.021 [2024-12-07 11:46:24.105268] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.021 [2024-12-07 11:46:24.105621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.021 [2024-12-07 11:46:24.105641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.021 [2024-12-07 11:46:24.105653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.021 [2024-12-07 11:46:24.105670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.021 [2024-12-07 11:46:24.105701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.021 [2024-12-07 11:46:24.105713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.105725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.021 [2024-12-07 11:46:24.105735] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.021 [2024-12-07 11:46:24.105742] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.021 [2024-12-07 11:46:24.105749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.021 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:25.021 [2024-12-07 11:46:24.115304] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:34:25.021 [2024-12-07 11:46:24.115327] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.021 [2024-12-07 11:46:24.115335] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.115342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.021 [2024-12-07 11:46:24.115364] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.115719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.021 [2024-12-07 11:46:24.115741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.021 [2024-12-07 11:46:24.115753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.021 [2024-12-07 11:46:24.115771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.021 [2024-12-07 11:46:24.115795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.021 [2024-12-07 11:46:24.115805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.115816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.021 [2024-12-07 11:46:24.115826] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:25.021 [2024-12-07 11:46:24.115833] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.021 [2024-12-07 11:46:24.115840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.021 [2024-12-07 11:46:24.125400] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.021 [2024-12-07 11:46:24.125426] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.021 [2024-12-07 11:46:24.125433] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.125441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.021 [2024-12-07 11:46:24.125471] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.021 [2024-12-07 11:46:24.125827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.021 [2024-12-07 11:46:24.125848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.021 [2024-12-07 11:46:24.125860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.021 [2024-12-07 11:46:24.125880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.021 [2024-12-07 11:46:24.125911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.021 [2024-12-07 11:46:24.125922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.125933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.021 [2024-12-07 11:46:24.125943] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.021 [2024-12-07 11:46:24.125951] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.021 [2024-12-07 11:46:24.125957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.021 [2024-12-07 11:46:24.135505] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.021 [2024-12-07 11:46:24.135526] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:25.021 [2024-12-07 11:46:24.135533] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.135540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.021 [2024-12-07 11:46:24.135562] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:25.021 [2024-12-07 11:46:24.135871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.021 [2024-12-07 11:46:24.135889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.021 [2024-12-07 11:46:24.135900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.021 [2024-12-07 11:46:24.135917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.021 [2024-12-07 11:46:24.135931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.021 [2024-12-07 11:46:24.135941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.021 [2024-12-07 11:46:24.135951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.021 [2024-12-07 11:46:24.135959] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.022 [2024-12-07 11:46:24.135967] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:25.022 [2024-12-07 11:46:24.135973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.022 [2024-12-07 11:46:24.145597] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.022 [2024-12-07 11:46:24.145620] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.022 [2024-12-07 11:46:24.145628] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.022 [2024-12-07 11:46:24.145635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.022 [2024-12-07 11:46:24.145658] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:25.022 [2024-12-07 11:46:24.146018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.022 [2024-12-07 11:46:24.146037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.022 [2024-12-07 11:46:24.146051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.022 [2024-12-07 11:46:24.146067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.022 [2024-12-07 11:46:24.146097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.022 [2024-12-07 11:46:24.146108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.022 [2024-12-07 11:46:24.146118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:34:25.022 [2024-12-07 11:46:24.146127] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.022 [2024-12-07 11:46:24.146135] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.022 [2024-12-07 11:46:24.146141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.022 [2024-12-07 11:46:24.155692] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.022 [2024-12-07 11:46:24.155715] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.022 [2024-12-07 11:46:24.155722] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.022 [2024-12-07 11:46:24.155729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.022 [2024-12-07 11:46:24.155751] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.022 [2024-12-07 11:46:24.156105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.022 [2024-12-07 11:46:24.156124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.022 [2024-12-07 11:46:24.156135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.022 [2024-12-07 11:46:24.156151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.022 [2024-12-07 11:46:24.156174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.022 [2024-12-07 11:46:24.156184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.022 [2024-12-07 11:46:24.156194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.022 [2024-12-07 11:46:24.156203] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.022 [2024-12-07 11:46:24.156211] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.022 [2024-12-07 11:46:24.156218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:25.022 [2024-12-07 11:46:24.165786] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.022 [2024-12-07 11:46:24.165807] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.022 [2024-12-07 11:46:24.165815] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.022 [2024-12-07 11:46:24.165822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.022 [2024-12-07 11:46:24.165845] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:25.022 [2024-12-07 11:46:24.166282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.022 [2024-12-07 11:46:24.166330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.022 [2024-12-07 11:46:24.166346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.022 [2024-12-07 11:46:24.166373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.022 [2024-12-07 11:46:24.166420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.022 [2024-12-07 11:46:24.166433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.022 [2024-12-07 11:46:24.166446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.022 [2024-12-07 11:46:24.166457] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.022 [2024-12-07 11:46:24.166466] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.022 [2024-12-07 11:46:24.166473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:25.022 [2024-12-07 11:46:24.175883] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:25.022 [2024-12-07 11:46:24.175910] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:25.022 [2024-12-07 11:46:24.175919] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:25.022 [2024-12-07 11:46:24.175926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.022 [2024-12-07 11:46:24.175958] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:25.022 [2024-12-07 11:46:24.176178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.022 [2024-12-07 11:46:24.176202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:34:25.022 [2024-12-07 11:46:24.176214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:34:25.022 [2024-12-07 11:46:24.176236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:34:25.022 [2024-12-07 11:46:24.176253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:25.022 [2024-12-07 11:46:24.176263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:25.022 [2024-12-07 11:46:24.176274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:25.022 [2024-12-07 11:46:24.176283] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:25.022 [2024-12-07 11:46:24.176291] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:25.022 [2024-12-07 11:46:24.176297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:25.022 [2024-12-07 11:46:24.180696] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:25.022 [2024-12-07 11:46:24.180729] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:34:25.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:25.963 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:25.964 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:25.964 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:25.964 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:25.964 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.964 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:26.223 11:46:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.223 
11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:26.223 11:46:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.223 11:46:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.601 [2024-12-07 11:46:26.537151] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:27.601 [2024-12-07 11:46:26.537177] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:27.601 [2024-12-07 11:46:26.537206] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:27.601 [2024-12-07 11:46:26.664642] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:27.601 [2024-12-07 11:46:26.727555] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:27.601 [2024-12-07 11:46:26.728711] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500039fb00:1 started. 00:34:27.601 [2024-12-07 11:46:26.731022] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:27.601 [2024-12-07 11:46:26.731061] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:27.601 [2024-12-07 11:46:26.734468] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500039fb00 was disconnected and freed. delete nvme_qpair. 
00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:27.601 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 request: 00:34:27.602 { 00:34:27.602 "name": "nvme", 00:34:27.602 "trtype": "tcp", 00:34:27.602 "traddr": "10.0.0.2", 00:34:27.602 "adrfam": "ipv4", 00:34:27.602 "trsvcid": "8009", 00:34:27.602 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:27.602 "wait_for_attach": true, 00:34:27.602 "method": "bdev_nvme_start_discovery", 00:34:27.602 "req_id": 1 00:34:27.602 } 00:34:27.602 Got JSON-RPC error response 00:34:27.602 response: 00:34:27.602 { 00:34:27.602 "code": -17, 00:34:27.602 "message": "File exists" 00:34:27.602 } 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 request: 00:34:27.602 { 00:34:27.602 "name": "nvme_second", 00:34:27.602 "trtype": "tcp", 00:34:27.602 "traddr": "10.0.0.2", 00:34:27.602 "adrfam": "ipv4", 00:34:27.602 "trsvcid": "8009", 00:34:27.602 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:27.602 "wait_for_attach": true, 00:34:27.602 "method": "bdev_nvme_start_discovery", 00:34:27.602 "req_id": 1 00:34:27.602 } 00:34:27.602 Got JSON-RPC error response 00:34:27.602 response: 00:34:27.602 { 00:34:27.602 "code": -17, 00:34:27.602 "message": "File exists" 00:34:27.602 } 
00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:27.602 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:27.863 11:46:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.802 [2024-12-07 11:46:27.986670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.802 [2024-12-07 11:46:27.986709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=8010 00:34:28.802 [2024-12-07 11:46:27.986760] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:28.802 [2024-12-07 11:46:27.986774] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:28.802 [2024-12-07 11:46:27.986786] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:29.742 [2024-12-07 11:46:28.988998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.742 [2024-12-07 11:46:28.989036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0280 with addr=10.0.0.2, port=8010 00:34:29.742 [2024-12-07 11:46:28.989077] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:29.742 [2024-12-07 11:46:28.989088] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:29.742 [2024-12-07 11:46:28.989098] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:30.684 [2024-12-07 11:46:29.990918] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:30.684 request: 00:34:30.684 { 00:34:30.684 "name": "nvme_second", 00:34:30.684 "trtype": "tcp", 00:34:30.684 "traddr": "10.0.0.2", 00:34:30.684 "adrfam": "ipv4", 00:34:30.684 "trsvcid": "8010", 00:34:30.684 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:30.684 "wait_for_attach": false, 00:34:30.684 "attach_timeout_ms": 3000, 00:34:30.684 "method": "bdev_nvme_start_discovery", 00:34:30.684 "req_id": 
1 00:34:30.684 } 00:34:30.684 Got JSON-RPC error response 00:34:30.684 response: 00:34:30.684 { 00:34:30.684 "code": -110, 00:34:30.684 "message": "Connection timed out" 00:34:30.684 } 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:30.684 11:46:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:30.684 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2720431 00:34:30.946 11:46:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.946 rmmod nvme_tcp 00:34:30.946 rmmod nvme_fabrics 00:34:30.946 rmmod nvme_keyring 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2720263 ']' 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2720263 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2720263 ']' 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2720263 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720263 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720263' 00:34:30.946 killing process with pid 2720263 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2720263 00:34:30.946 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2720263 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.518 11:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:34:34.060 00:34:34.060 real 0m21.281s 00:34:34.060 user 0m25.944s 00:34:34.060 sys 0m7.121s 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.060 ************************************ 00:34:34.060 END TEST nvmf_host_discovery 00:34:34.060 ************************************ 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.060 ************************************ 00:34:34.060 START TEST nvmf_host_multipath_status 00:34:34.060 ************************************ 00:34:34.060 11:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:34.060 * Looking for test storage... 
00:34:34.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:34.060 11:46:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.060 11:46:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.060 --rc genhtml_branch_coverage=1 00:34:34.060 --rc genhtml_function_coverage=1 00:34:34.060 --rc genhtml_legend=1 00:34:34.060 --rc geninfo_all_blocks=1 00:34:34.060 --rc geninfo_unexecuted_blocks=1 00:34:34.060 00:34:34.060 ' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.060 --rc genhtml_branch_coverage=1 00:34:34.060 --rc genhtml_function_coverage=1 00:34:34.060 --rc genhtml_legend=1 00:34:34.060 --rc geninfo_all_blocks=1 00:34:34.060 --rc geninfo_unexecuted_blocks=1 00:34:34.060 00:34:34.060 ' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.060 --rc genhtml_branch_coverage=1 00:34:34.060 --rc genhtml_function_coverage=1 00:34:34.060 --rc genhtml_legend=1 00:34:34.060 --rc geninfo_all_blocks=1 00:34:34.060 --rc geninfo_unexecuted_blocks=1 00:34:34.060 00:34:34.060 ' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.060 --rc genhtml_branch_coverage=1 00:34:34.060 --rc genhtml_function_coverage=1 00:34:34.060 --rc genhtml_legend=1 00:34:34.060 --rc geninfo_all_blocks=1 00:34:34.060 --rc geninfo_unexecuted_blocks=1 00:34:34.060 00:34:34.060 ' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:34.060 
11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.060 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.061 11:46:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.061 11:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.242 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:42.243 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:42.243 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:42.243 Found net devices under 0000:31:00.0: cvl_0_0 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.243 11:46:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:42.243 Found net devices under 0000:31:00.1: cvl_0_1 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.243 11:46:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.243 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:42.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:34:42.244 00:34:42.244 --- 10.0.0.2 ping statistics --- 00:34:42.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.244 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:42.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:34:42.244 00:34:42.244 --- 10.0.0.1 ping statistics --- 00:34:42.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.244 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2726869 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2726869 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2726869 ']' 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.244 11:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.244 [2024-12-07 11:46:40.688829] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:42.244 [2024-12-07 11:46:40.688966] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.244 [2024-12-07 11:46:40.841402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:42.244 [2024-12-07 11:46:40.941789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:42.244 [2024-12-07 11:46:40.941829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:42.244 [2024-12-07 11:46:40.941841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:42.244 [2024-12-07 11:46:40.941859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:42.244 [2024-12-07 11:46:40.941868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:42.244 [2024-12-07 11:46:40.943679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.244 [2024-12-07 11:46:40.943701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2726869 00:34:42.244 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:42.505 [2024-12-07 11:46:41.636570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.505 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:42.766 Malloc0 00:34:42.766 11:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:42.766 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.026 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.287 [2024-12-07 11:46:42.414353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:43.287 [2024-12-07 11:46:42.578773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2727235 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2727235 /var/tmp/bdevperf.sock 00:34:43.287 11:46:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2727235 ']' 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:43.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.287 11:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.230 11:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.230 11:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:44.230 11:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:44.490 11:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:44.752 Nvme0n1 00:34:44.752 11:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:45.322 Nvme0n1 00:34:45.322 11:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:45.322 11:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:47.240 11:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:47.240 11:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:47.499 11:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:47.499 11:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.877 11:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:48.877 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.877 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:48.877 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.877 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.136 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.136 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:49.136 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.136 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.396 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:49.655 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.655 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:49.655 11:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:49.914 11:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:49.914 11:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.291 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:51.550 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.550 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:51.550 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.550 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:51.809 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.809 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:51.809 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.809 11:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.809 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.809 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:51.810 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.810 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:52.069 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.069 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:52.069 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:52.327 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:52.327 11:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.717 11:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.717 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.717 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.717 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.717 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.976 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.976 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.976 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.976 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.236 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:54.515 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.515 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:54.515 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.515 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:54.515 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:54.822 11:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:54.822 11:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:55.893 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:55.893 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.893 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.893 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.154 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:56.414 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.414 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:56.414 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.414 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:56.674 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.674 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:56.674 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.674 11:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:56.674 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.674 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:56.674 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.674 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.934 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.934 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:56.934 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:57.194 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:57.453 11:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:58.391 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:58.391 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:58.391 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.391 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:58.391 11:46:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.391 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:58.649 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.650 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:58.650 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.650 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:58.650 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.650 11:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.908 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.908 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.908 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.908 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:59.167 
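The repeated `port_status` checks in this log all pipe `bdev_nvme_get_io_paths` output through the same jq filter, selecting one path by `trsvcid` and reading a single boolean field. A minimal Python sketch of that selection follows; the sample JSON is a hypothetical payload modeling only the fields the filter touches (an assumption, not a captured RPC response):

```python
import json

# Hypothetical sample shaped like `bdev_nvme_get_io_paths` output; only the
# fields the log's jq filters read are modeled (an assumption, not a real
# captured response from bdevperf).
sample = json.loads("""
{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": false, "connected": true, "accessible": false},
  {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
]}]}
""")

def port_status(data, trsvcid, field):
    """Equivalent of the log's filter:
    jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>'
    Returns the field value for the first path matching the given trsvcid."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

# Mirrors `check_status false false true true false false` above: with both
# listeners inaccessible, neither path is current or accessible, but the
# controller connections themselves stay up.
print(port_status(sample, "4420", "current"))
print(port_status(sample, "4421", "accessible"))
```

The shell helper in `multipath_status.sh` then string-compares the jq output (`[[ true == \t\r\u\e ]]`), which is why each check in the log appears as an RPC call, a jq filter, and a bracket test.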
11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.167 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:59.426 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.426 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:59.426 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:59.685 11:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:59.685 11:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.064 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:01.323 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.323 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:01.323 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.323 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:01.582 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.582 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:01.582 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.582 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.842 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.842 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:01.842 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.842 11:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:01.842 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.842 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:02.101 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:02.101 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:02.360 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:02.360 11:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:03.739 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:03.739 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:03.739 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:03.740 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:03.740 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.740 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:03.740 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.740 11:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.740 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.740 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.740 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.740 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.000 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:04.260 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.261 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:04.261 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.261 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:04.261 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.261 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:04.520 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.520 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:04.520 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:04.780 11:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:04.780 11:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.160 11:47:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.160 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.419 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.419 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.419 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.419 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.678 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.678 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.678 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.678 11:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.937 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.937 
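The trace above repeats the same three-step helper over and over: an `rpc.py ... bdev_nvme_get_io_paths` call, a `jq` filter over `.poll_groups[].io_paths[]` selecting the path by listener port, and a `[[ ... ]]` comparison against the expected value. A minimal Python sketch of that `port_status` check follows; the payload is a hypothetical sample shaped like `bdev_nvme_get_io_paths` output (only the field names referenced by the jq filters are taken from the log, the values are illustrative):

```python
import json

# Hypothetical payload shaped like `bdev_nvme_get_io_paths` output; field
# names mirror the jq filters in the trace, the values are made up.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"},
       "current": false, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"},
       "current": true, "connected": true, "accessible": true}
    ]}
  ]
}
""")

def port_status(paths, port, field, expected):
    """Equivalent of the shell helper: select the io_path whose listener
    port (trsvcid) matches, then compare the requested boolean field."""
    for group in paths["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == str(port):
                return path[field] is expected
    return False  # no io_path for that port

# Mirrors `port_status 4420 current false` / `port_status 4421 current true`
print(port_status(SAMPLE, 4420, "current", False))  # True
print(port_status(SAMPLE, 4421, "current", True))   # True
```

The shell version reaches the same result by string comparison (`[[ false == \f\a\l\s\e ]]`), since jq prints the selected boolean as literal `true`/`false` text.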
11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:06.937 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.937 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.937 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.937 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:06.938 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:07.197 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:07.456 11:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:08.395 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:08.395 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:08.395 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.395 11:47:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.654 11:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.915 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.915 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.915 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.915 11:47:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.176 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.435 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.436 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:09.436 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:09.697 11:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:09.956 11:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:10.897 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:10.897 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:10.897 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.897 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.157 11:47:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.157 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.417 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.417 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.417 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.417 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.679 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.679 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:11.679 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.679 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:11.679 11:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.679 
11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:11.679 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.679 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2727235 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2727235 ']' 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2727235 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727235 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727235' 00:35:11.940 killing process with pid 2727235 00:35:11.940 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2727235 00:35:11.940 
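Each `set_ANA_state <state-4420> <state-4421>` step in the trace issues two `nvmf_subsystem_listener_set_ana_state` RPCs, one per listener port. A hedged sketch that only builds the argument vectors (the rpc.py path and NQN are copied from the log; actually invoking the script against a live target is out of scope here):

```python
# Constants copied verbatim from the trace above
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def set_ana_state_cmds(state_4420, state_4421, addr="10.0.0.2"):
    """Build the two rpc.py invocations the shell helper issues: the
    listener on port 4420 gets state_4420, the one on 4421 gets state_4421."""
    def cmd(port, state):
        return [RPC, "nvmf_subsystem_listener_set_ana_state", NQN,
                "-t", "tcp", "-a", addr, "-s", str(port), "-n", state]
    return [cmd(4420, state_4420), cmd(4421, state_4421)]

# Mirrors `set_ANA_state inaccessible optimized` from the trace
for argv in set_ana_state_cmds("inaccessible", "optimized"):
    print(" ".join(argv))
```

After each state change the script sleeps one second before `check_status`, giving the initiator time to observe the new ANA state on both paths.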
11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2727235 00:35:11.940 { 00:35:11.940 "results": [ 00:35:11.940 { 00:35:11.940 "job": "Nvme0n1", 00:35:11.940 "core_mask": "0x4", 00:35:11.940 "workload": "verify", 00:35:11.940 "status": "terminated", 00:35:11.940 "verify_range": { 00:35:11.940 "start": 0, 00:35:11.940 "length": 16384 00:35:11.940 }, 00:35:11.940 "queue_depth": 128, 00:35:11.940 "io_size": 4096, 00:35:11.940 "runtime": 26.719495, 00:35:11.940 "iops": 9742.399697299668, 00:35:11.940 "mibps": 38.05624881757683, 00:35:11.940 "io_failed": 0, 00:35:11.940 "io_timeout": 0, 00:35:11.940 "avg_latency_us": 13118.49789114601, 00:35:11.940 "min_latency_us": 343.04, 00:35:11.940 "max_latency_us": 3019898.88 00:35:11.940 } 00:35:11.940 ], 00:35:11.940 "core_count": 1 00:35:11.940 } 00:35:12.513 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2727235 00:35:12.513 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:12.513 [2024-12-07 11:46:42.686600] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:12.513 [2024-12-07 11:46:42.686717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727235 ] 00:35:12.513 [2024-12-07 11:46:42.793337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.513 [2024-12-07 11:46:42.866830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.513 Running I/O for 90 seconds... 
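The terminated bdevperf job above reports `iops`, `io_size`, and `mibps` together, and the three figures are mutually consistent: MiB/s = IOPS × io_size / 2^20. A quick arithmetic check against the numbers in the results block:

```python
# Figures copied from the bdevperf results JSON above
iops = 9742.399697299668
io_size = 4096                      # bytes per I/O
reported_mibps = 38.05624881757683
runtime = 26.719495                 # seconds

# Multiplying/dividing by powers of two is exact in binary floating point,
# so this reproduces the reported value to full precision.
mibps = iops * io_size / 2**20
print(mibps)
print(abs(mibps - reported_mibps) < 1e-9)  # True
```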
00:35:12.513 8477.00 IOPS, 33.11 MiB/s [2024-12-07T10:47:11.867Z] 8555.50 IOPS, 33.42 MiB/s [2024-12-07T10:47:11.867Z] 8606.00 IOPS, 33.62 MiB/s [2024-12-07T10:47:11.867Z] 8610.25 IOPS, 33.63 MiB/s [2024-12-07T10:47:11.867Z] 8841.20 IOPS, 34.54 MiB/s [2024-12-07T10:47:11.867Z] 9298.17 IOPS, 36.32 MiB/s [2024-12-07T10:47:11.867Z] 9618.71 IOPS, 37.57 MiB/s [2024-12-07T10:47:11.867Z] 9566.38 IOPS, 37.37 MiB/s [2024-12-07T10:47:11.867Z] 9464.11 IOPS, 36.97 MiB/s [2024-12-07T10:47:11.867Z] 9379.90 IOPS, 36.64 MiB/s [2024-12-07T10:47:11.867Z] 9309.36 IOPS, 36.36 MiB/s [2024-12-07T10:47:11.867Z] [2024-12-07 11:46:56.342617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.513 [2024-12-07 11:46:56.342661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.342981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.342990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.343004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.343017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:35:12.513 [2024-12-07 11:46:56.343031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.343038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.343052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.343060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.344606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.344627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.344652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.344660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.344675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.513 [2024-12-07 11:46:56.344684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:12.513 [2024-12-07 11:46:56.344699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 
[2024-12-07 11:46:56.344707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 
11:46:56.344835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 
11:46:56.344962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.344999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 
11:46:56.345141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:12.514 [2024-12-07 11:46:56.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.514 [2024-12-07 11:46:56.345644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.345981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.345989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.515 [2024-12-07 11:46:56.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.515 [2024-12-07 11:46:56.346742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:12.516 [2024-12-07 11:46:56.346761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.516 [2024-12-07 11:46:56.346768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:12.516 [... repeated nvme_qpair.c WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE pairs for qid:1, lba 90160 through 90368, elided ...] 00:35:12.516 [2024-12-07 11:46:56.347537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.516 [2024-12-07 11:46:56.347546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.516 [2024-12-07 11:46:56.347564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.516 [2024-12-07 11:46:56.347572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.516 [2024-12-07 11:46:56.347590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.516 [2024-12-07 11:46:56.347598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.516 9123.00 IOPS, 35.64 MiB/s [2024-12-07T10:47:11.870Z] 8421.23 IOPS, 32.90 MiB/s [2024-12-07T10:47:11.870Z] 7819.71 IOPS, 30.55 MiB/s [2024-12-07T10:47:11.870Z] 7441.07 IOPS, 29.07 MiB/s [2024-12-07T10:47:11.870Z] 7707.06 IOPS, 30.11 MiB/s [2024-12-07T10:47:11.870Z] 7935.24 IOPS, 31.00 MiB/s [2024-12-07T10:47:11.870Z] 8362.39 IOPS, 32.67 MiB/s [2024-12-07T10:47:11.870Z] 8742.95 IOPS, 34.15 MiB/s [2024-12-07T10:47:11.870Z] 8963.55 IOPS, 35.01 MiB/s [2024-12-07T10:47:11.870Z] 9095.67 IOPS, 35.53 MiB/s [2024-12-07T10:47:11.870Z] 9201.36 IOPS, 35.94 MiB/s [2024-12-07T10:47:11.870Z] 9473.26 IOPS, 37.00 MiB/s [2024-12-07T10:47:11.870Z] 9726.79 IOPS, 38.00 MiB/s [2024-12-07T10:47:11.870Z] [2024-12-07 11:47:09.056354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.516 [2024-12-07 11:47:09.056400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:12.516 [... repeated nvme_qpair.c READ/WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE pairs for qid:1, lba 94296 through 95232, elided ...] [2024-12-07 11:47:09.057621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:12.517 [2024-12-07 11:47:09.057635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.517 [2024-12-07 11:47:09.057643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:12.517 [2024-12-07 11:47:09.057656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.517 [2024-12-07 11:47:09.057663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:12.517 [2024-12-07 11:47:09.057677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.517 [2024-12-07 11:47:09.057686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:12.517 9820.56 IOPS, 38.36 MiB/s [2024-12-07T10:47:11.871Z] 9774.31 IOPS, 38.18 MiB/s [2024-12-07T10:47:11.871Z] Received shutdown signal, test time was about 26.720125 seconds 00:35:12.517 00:35:12.517 Latency(us) 00:35:12.517 [2024-12-07T10:47:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.517 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:12.517 Verification LBA range: start 0x0 length 0x4000 00:35:12.517 Nvme0n1 : 26.72 9742.40 38.06 0.00 0.00 13118.50 343.04 3019898.88 00:35:12.517 [2024-12-07T10:47:11.871Z] =================================================================================================================== 00:35:12.517 [2024-12-07T10:47:11.871Z] Total : 9742.40 38.06 0.00 0.00 13118.50 343.04 3019898.88 00:35:12.517 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.778 rmmod nvme_tcp 00:35:12.778 rmmod nvme_fabrics 00:35:12.778 rmmod nvme_keyring 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2726869 ']' 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2726869 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 
2726869 ']' 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2726869 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.778 11:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726869 00:35:12.778 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:12.778 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:12.778 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726869' 00:35:12.778 killing process with pid 2726869 00:35:12.778 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2726869 00:35:12.778 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2726869 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:35:13.721 11:47:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:13.721 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.722 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.722 11:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.636 11:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.636 00:35:15.636 real 0m42.060s 00:35:15.636 user 1m47.916s 00:35:15.636 sys 0m11.563s 00:35:15.636 11:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.636 11:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:15.636 ************************************ 00:35:15.636 END TEST nvmf_host_multipath_status 00:35:15.636 ************************************ 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.899 ************************************ 00:35:15.899 START TEST nvmf_discovery_remove_ifc 00:35:15.899 ************************************ 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:15.899 * Looking for test storage... 00:35:15.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.899 11:47:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.899 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:15.900 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:16.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.178 --rc genhtml_branch_coverage=1 00:35:16.178 --rc genhtml_function_coverage=1 00:35:16.178 --rc genhtml_legend=1 00:35:16.178 --rc geninfo_all_blocks=1 00:35:16.178 --rc geninfo_unexecuted_blocks=1 00:35:16.178 00:35:16.178 ' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:16.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.178 --rc genhtml_branch_coverage=1 00:35:16.178 --rc genhtml_function_coverage=1 00:35:16.178 --rc genhtml_legend=1 00:35:16.178 --rc geninfo_all_blocks=1 00:35:16.178 --rc geninfo_unexecuted_blocks=1 00:35:16.178 00:35:16.178 ' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:16.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.178 --rc genhtml_branch_coverage=1 00:35:16.178 --rc genhtml_function_coverage=1 00:35:16.178 --rc genhtml_legend=1 00:35:16.178 --rc geninfo_all_blocks=1 00:35:16.178 --rc geninfo_unexecuted_blocks=1 00:35:16.178 00:35:16.178 ' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:16.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.178 --rc genhtml_branch_coverage=1 00:35:16.178 --rc genhtml_function_coverage=1 00:35:16.178 --rc genhtml_legend=1 00:35:16.178 --rc geninfo_all_blocks=1 00:35:16.178 --rc geninfo_unexecuted_blocks=1 00:35:16.178 00:35:16.178 ' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.178 11:47:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.178 11:47:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:16.178 
11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:16.178 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.179 11:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.322 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.322 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:24.322 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:24.322 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:24.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:24.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:24.323 Found net devices under 0000:31:00.0: cvl_0_0 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:24.323 11:47:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:24.323 Found net devices under 0000:31:00.1: cvl_0_1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:24.323 11:47:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.323 11:47:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:24.323 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:24.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:35:24.323 00:35:24.324 --- 10.0.0.2 ping statistics --- 00:35:24.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.324 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:24.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:35:24.324 00:35:24.324 --- 10.0.0.1 ping statistics --- 00:35:24.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.324 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2737227 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2737227 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2737227 ']' 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.324 11:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.324 [2024-12-07 11:47:22.760112] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:35:24.324 [2024-12-07 11:47:22.760241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.324 [2024-12-07 11:47:22.924917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.324 [2024-12-07 11:47:23.050933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.324 [2024-12-07 11:47:23.050999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:24.324 [2024-12-07 11:47:23.051028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.324 [2024-12-07 11:47:23.051041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.324 [2024-12-07 11:47:23.051058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.324 [2024-12-07 11:47:23.052557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.324 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.324 [2024-12-07 11:47:23.622645] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.324 [2024-12-07 11:47:23.630880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:24.324 null0 00:35:24.324 [2024-12-07 11:47:23.662891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2737535 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2737535 /tmp/host.sock 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2737535 ']' 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:24.584 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.584 11:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.584 [2024-12-07 11:47:23.778358] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:35:24.584 [2024-12-07 11:47:23.778493] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737535 ] 00:35:24.584 [2024-12-07 11:47:23.919703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.844 [2024-12-07 11:47:24.019080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.414 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.674 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.674 11:47:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:25.674 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.674 11:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.615 [2024-12-07 11:47:25.874120] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:26.615 [2024-12-07 11:47:25.874156] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:26.615 [2024-12-07 11:47:25.874184] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:26.615 [2024-12-07 11:47:25.960460] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:26.875 [2024-12-07 11:47:26.185944] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:26.875 [2024-12-07 11:47:26.187480] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500039e980:1 started. 
00:35:26.875 [2024-12-07 11:47:26.189386] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:26.875 [2024-12-07 11:47:26.189448] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:26.875 [2024-12-07 11:47:26.189497] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:26.875 [2024-12-07 11:47:26.189520] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:26.875 [2024-12-07 11:47:26.189553] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:26.875 [2024-12-07 11:47:26.192946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500039e980 was disconnected and freed. delete nvme_qpair. 
00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:26.875 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.134 11:47:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:27.134 11:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.071 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:28.331 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.331 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:28.331 11:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:29.273 11:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.216 11:47:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:30.216 11:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:31.600 11:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:32.541 [2024-12-07 11:47:31.629319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:32.541 [2024-12-07 11:47:31.629384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.541 [2024-12-07 11:47:31.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.541 [2024-12-07 11:47:31.629418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.541 [2024-12-07 11:47:31.629429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.541 [2024-12-07 11:47:31.629441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.541 [2024-12-07 11:47:31.629453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.541 [2024-12-07 11:47:31.629469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.541 [2024-12-07 11:47:31.629480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.541 [2024-12-07 11:47:31.629492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.541 [2024-12-07 11:47:31.629503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.541 [2024-12-07 11:47:31.629514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:35:32.541 [2024-12-07 11:47:31.639340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.541 [2024-12-07 11:47:31.649375] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:32.541 [2024-12-07 11:47:31.649410] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:32.541 [2024-12-07 11:47:31.649423] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:32.541 [2024-12-07 11:47:31.649432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:32.541 [2024-12-07 11:47:31.649465] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:32.541 11:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.484 [2024-12-07 11:47:32.681126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:33.484 [2024-12-07 11:47:32.681172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:35:33.484 [2024-12-07 11:47:32.681191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:35:33.484 [2024-12-07 11:47:32.681224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:35:33.484 [2024-12-07 11:47:32.681285] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:33.484 [2024-12-07 11:47:32.681319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:33.484 [2024-12-07 11:47:32.681332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:33.484 [2024-12-07 11:47:32.681345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:33.484 [2024-12-07 11:47:32.681358] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:33.484 [2024-12-07 11:47:32.681371] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:33.484 [2024-12-07 11:47:32.681381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:33.484 [2024-12-07 11:47:32.681395] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:33.484 [2024-12-07 11:47:32.681404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:33.484 11:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:34.425 [2024-12-07 11:47:33.683787] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:34.425 [2024-12-07 11:47:33.683819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:34.425 [2024-12-07 11:47:33.683842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:34.425 [2024-12-07 11:47:33.683855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:34.425 [2024-12-07 11:47:33.683866] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:34.425 [2024-12-07 11:47:33.683877] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:34.425 [2024-12-07 11:47:33.683886] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:34.425 [2024-12-07 11:47:33.683894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:34.425 [2024-12-07 11:47:33.683928] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:34.425 [2024-12-07 11:47:33.683961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.425 [2024-12-07 11:47:33.683978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.425 [2024-12-07 11:47:33.683993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.425 [2024-12-07 11:47:33.684005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.425 [2024-12-07 11:47:33.684025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:34.425 [2024-12-07 11:47:33.684036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.425 [2024-12-07 11:47:33.684055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.425 [2024-12-07 11:47:33.684065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.425 [2024-12-07 11:47:33.684078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.425 [2024-12-07 11:47:33.684089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.425 [2024-12-07 11:47:33.684100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:34.425 [2024-12-07 11:47:33.684188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039df80 (9): Bad file descriptor 00:35:34.425 [2024-12-07 11:47:33.685252] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:34.425 [2024-12-07 11:47:33.685281] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.425 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:34.685 11:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.624 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.883 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:35.883 11:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:36.453 [2024-12-07 11:47:35.739234] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:36.453 [2024-12-07 11:47:35.739262] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:36.453 [2024-12-07 11:47:35.739296] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:36.714 [2024-12-07 11:47:35.826583] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:36.714 [2024-12-07 11:47:35.925697] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:36.714 [2024-12-07 11:47:35.926894] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500039f600:1 started. 00:35:36.714 [2024-12-07 11:47:35.928770] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:36.714 [2024-12-07 11:47:35.928818] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:36.714 [2024-12-07 11:47:35.928863] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:36.714 [2024-12-07 11:47:35.928884] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:36.714 [2024-12-07 11:47:35.928897] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:36.714 [2024-12-07 11:47:35.936722] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500039f600 was disconnected and freed. delete nvme_qpair. 
00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:36.714 11:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2737535 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2737535 ']' 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2737535 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.714 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737535 
00:35:36.973 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:36.973 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:36.973 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737535' 00:35:36.973 killing process with pid 2737535 00:35:36.973 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2737535 00:35:36.973 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2737535 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.543 rmmod nvme_tcp 00:35:37.543 rmmod nvme_fabrics 00:35:37.543 rmmod nvme_keyring 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2737227 ']' 00:35:37.543 
11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2737227 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2737227 ']' 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2737227 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737227 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737227' 00:35:37.543 killing process with pid 2737227 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2737227 00:35:37.543 11:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2737227 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:38.113 11:47:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.113 11:47:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.656 00:35:40.656 real 0m24.452s 00:35:40.656 user 0m29.101s 00:35:40.656 sys 0m7.189s 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.656 ************************************ 00:35:40.656 END TEST nvmf_discovery_remove_ifc 00:35:40.656 ************************************ 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.656 ************************************ 00:35:40.656 
START TEST nvmf_identify_kernel_target 00:35:40.656 ************************************ 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:40.656 * Looking for test storage... 00:35:40.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.656 11:47:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:40.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.656 --rc genhtml_branch_coverage=1 00:35:40.656 --rc genhtml_function_coverage=1 00:35:40.656 --rc genhtml_legend=1 00:35:40.656 --rc geninfo_all_blocks=1 00:35:40.656 --rc geninfo_unexecuted_blocks=1 00:35:40.656 00:35:40.656 ' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:40.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.656 --rc genhtml_branch_coverage=1 00:35:40.656 --rc genhtml_function_coverage=1 00:35:40.656 --rc genhtml_legend=1 00:35:40.656 --rc geninfo_all_blocks=1 00:35:40.656 --rc geninfo_unexecuted_blocks=1 00:35:40.656 00:35:40.656 ' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:40.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.656 --rc genhtml_branch_coverage=1 00:35:40.656 --rc genhtml_function_coverage=1 00:35:40.656 --rc genhtml_legend=1 00:35:40.656 --rc geninfo_all_blocks=1 00:35:40.656 --rc geninfo_unexecuted_blocks=1 00:35:40.656 00:35:40.656 ' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:40.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.656 --rc genhtml_branch_coverage=1 00:35:40.656 --rc genhtml_function_coverage=1 00:35:40.656 --rc genhtml_legend=1 00:35:40.656 --rc geninfo_all_blocks=1 
00:35:40.656 --rc geninfo_unexecuted_blocks=1 00:35:40.656 00:35:40.656 ' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.656 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:40.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:40.657 11:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.397 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.397 11:47:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:47.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.398 11:47:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:47.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.398 11:47:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:47.398 Found net devices under 0000:31:00.0: cvl_0_0 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:47.398 Found net devices under 0000:31:00.1: cvl_0_1 
00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:47.398 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:47.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:47.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:35:47.660 00:35:47.660 --- 10.0.0.2 ping statistics --- 00:35:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.660 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:47.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:35:47.660 00:35:47.660 --- 10.0.0.1 ping statistics --- 00:35:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.660 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:47.660 
11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:47.660 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:47.661 11:47:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:50.965 Waiting for block devices as requested 00:35:50.965 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:50.965 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:51.226 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:51.226 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:51.226 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:51.488 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:51.488 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:51.488 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:51.749 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:51.749 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:52.010 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:52.010 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:52.010 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:52.010 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:52.271 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:35:52.271 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:52.271 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:52.532 No valid GPT data, bailing 00:35:52.532 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:52.794 11:47:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:52.794 00:35:52.794 Discovery Log Number of Records 2, Generation counter 2 00:35:52.794 =====Discovery Log Entry 0====== 00:35:52.794 trtype: tcp 00:35:52.794 adrfam: ipv4 00:35:52.794 subtype: current discovery subsystem 
00:35:52.794 treq: not specified, sq flow control disable supported 00:35:52.794 portid: 1 00:35:52.794 trsvcid: 4420 00:35:52.794 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:52.794 traddr: 10.0.0.1 00:35:52.794 eflags: none 00:35:52.794 sectype: none 00:35:52.794 =====Discovery Log Entry 1====== 00:35:52.794 trtype: tcp 00:35:52.794 adrfam: ipv4 00:35:52.794 subtype: nvme subsystem 00:35:52.794 treq: not specified, sq flow control disable supported 00:35:52.794 portid: 1 00:35:52.794 trsvcid: 4420 00:35:52.794 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:52.794 traddr: 10.0.0.1 00:35:52.794 eflags: none 00:35:52.794 sectype: none 00:35:52.794 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:52.794 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:53.059 ===================================================== 00:35:53.059 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:53.059 ===================================================== 00:35:53.059 Controller Capabilities/Features 00:35:53.059 ================================ 00:35:53.059 Vendor ID: 0000 00:35:53.059 Subsystem Vendor ID: 0000 00:35:53.059 Serial Number: ae55f7950aa4dd4ab673 00:35:53.059 Model Number: Linux 00:35:53.059 Firmware Version: 6.8.9-20 00:35:53.059 Recommended Arb Burst: 0 00:35:53.059 IEEE OUI Identifier: 00 00 00 00:35:53.059 Multi-path I/O 00:35:53.059 May have multiple subsystem ports: No 00:35:53.059 May have multiple controllers: No 00:35:53.059 Associated with SR-IOV VF: No 00:35:53.059 Max Data Transfer Size: Unlimited 00:35:53.059 Max Number of Namespaces: 0 00:35:53.059 Max Number of I/O Queues: 1024 00:35:53.059 NVMe Specification Version (VS): 1.3 00:35:53.059 NVMe Specification Version (Identify): 1.3 00:35:53.059 Maximum Queue Entries: 1024 
00:35:53.059 Contiguous Queues Required: No 00:35:53.059 Arbitration Mechanisms Supported 00:35:53.059 Weighted Round Robin: Not Supported 00:35:53.059 Vendor Specific: Not Supported 00:35:53.059 Reset Timeout: 7500 ms 00:35:53.059 Doorbell Stride: 4 bytes 00:35:53.059 NVM Subsystem Reset: Not Supported 00:35:53.059 Command Sets Supported 00:35:53.059 NVM Command Set: Supported 00:35:53.059 Boot Partition: Not Supported 00:35:53.059 Memory Page Size Minimum: 4096 bytes 00:35:53.059 Memory Page Size Maximum: 4096 bytes 00:35:53.059 Persistent Memory Region: Not Supported 00:35:53.059 Optional Asynchronous Events Supported 00:35:53.059 Namespace Attribute Notices: Not Supported 00:35:53.059 Firmware Activation Notices: Not Supported 00:35:53.059 ANA Change Notices: Not Supported 00:35:53.059 PLE Aggregate Log Change Notices: Not Supported 00:35:53.059 LBA Status Info Alert Notices: Not Supported 00:35:53.059 EGE Aggregate Log Change Notices: Not Supported 00:35:53.059 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.059 Zone Descriptor Change Notices: Not Supported 00:35:53.059 Discovery Log Change Notices: Supported 00:35:53.059 Controller Attributes 00:35:53.059 128-bit Host Identifier: Not Supported 00:35:53.059 Non-Operational Permissive Mode: Not Supported 00:35:53.059 NVM Sets: Not Supported 00:35:53.059 Read Recovery Levels: Not Supported 00:35:53.059 Endurance Groups: Not Supported 00:35:53.059 Predictable Latency Mode: Not Supported 00:35:53.059 Traffic Based Keep ALive: Not Supported 00:35:53.059 Namespace Granularity: Not Supported 00:35:53.059 SQ Associations: Not Supported 00:35:53.059 UUID List: Not Supported 00:35:53.059 Multi-Domain Subsystem: Not Supported 00:35:53.059 Fixed Capacity Management: Not Supported 00:35:53.059 Variable Capacity Management: Not Supported 00:35:53.059 Delete Endurance Group: Not Supported 00:35:53.059 Delete NVM Set: Not Supported 00:35:53.059 Extended LBA Formats Supported: Not Supported 00:35:53.059 Flexible 
Data Placement Supported: Not Supported 00:35:53.059 00:35:53.059 Controller Memory Buffer Support 00:35:53.059 ================================ 00:35:53.059 Supported: No 00:35:53.059 00:35:53.059 Persistent Memory Region Support 00:35:53.059 ================================ 00:35:53.059 Supported: No 00:35:53.059 00:35:53.059 Admin Command Set Attributes 00:35:53.059 ============================ 00:35:53.059 Security Send/Receive: Not Supported 00:35:53.059 Format NVM: Not Supported 00:35:53.059 Firmware Activate/Download: Not Supported 00:35:53.059 Namespace Management: Not Supported 00:35:53.059 Device Self-Test: Not Supported 00:35:53.059 Directives: Not Supported 00:35:53.059 NVMe-MI: Not Supported 00:35:53.059 Virtualization Management: Not Supported 00:35:53.059 Doorbell Buffer Config: Not Supported 00:35:53.059 Get LBA Status Capability: Not Supported 00:35:53.059 Command & Feature Lockdown Capability: Not Supported 00:35:53.059 Abort Command Limit: 1 00:35:53.059 Async Event Request Limit: 1 00:35:53.059 Number of Firmware Slots: N/A 00:35:53.059 Firmware Slot 1 Read-Only: N/A 00:35:53.059 Firmware Activation Without Reset: N/A 00:35:53.059 Multiple Update Detection Support: N/A 00:35:53.059 Firmware Update Granularity: No Information Provided 00:35:53.059 Per-Namespace SMART Log: No 00:35:53.059 Asymmetric Namespace Access Log Page: Not Supported 00:35:53.059 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:53.059 Command Effects Log Page: Not Supported 00:35:53.059 Get Log Page Extended Data: Supported 00:35:53.060 Telemetry Log Pages: Not Supported 00:35:53.060 Persistent Event Log Pages: Not Supported 00:35:53.060 Supported Log Pages Log Page: May Support 00:35:53.060 Commands Supported & Effects Log Page: Not Supported 00:35:53.060 Feature Identifiers & Effects Log Page:May Support 00:35:53.060 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.060 Data Area 4 for Telemetry Log: Not Supported 00:35:53.060 Error Log Page Entries 
Supported: 1 00:35:53.060 Keep Alive: Not Supported 00:35:53.060 00:35:53.060 NVM Command Set Attributes 00:35:53.060 ========================== 00:35:53.060 Submission Queue Entry Size 00:35:53.060 Max: 1 00:35:53.060 Min: 1 00:35:53.060 Completion Queue Entry Size 00:35:53.060 Max: 1 00:35:53.060 Min: 1 00:35:53.060 Number of Namespaces: 0 00:35:53.060 Compare Command: Not Supported 00:35:53.060 Write Uncorrectable Command: Not Supported 00:35:53.060 Dataset Management Command: Not Supported 00:35:53.060 Write Zeroes Command: Not Supported 00:35:53.060 Set Features Save Field: Not Supported 00:35:53.060 Reservations: Not Supported 00:35:53.060 Timestamp: Not Supported 00:35:53.060 Copy: Not Supported 00:35:53.060 Volatile Write Cache: Not Present 00:35:53.060 Atomic Write Unit (Normal): 1 00:35:53.060 Atomic Write Unit (PFail): 1 00:35:53.060 Atomic Compare & Write Unit: 1 00:35:53.060 Fused Compare & Write: Not Supported 00:35:53.060 Scatter-Gather List 00:35:53.060 SGL Command Set: Supported 00:35:53.060 SGL Keyed: Not Supported 00:35:53.060 SGL Bit Bucket Descriptor: Not Supported 00:35:53.060 SGL Metadata Pointer: Not Supported 00:35:53.060 Oversized SGL: Not Supported 00:35:53.060 SGL Metadata Address: Not Supported 00:35:53.060 SGL Offset: Supported 00:35:53.060 Transport SGL Data Block: Not Supported 00:35:53.060 Replay Protected Memory Block: Not Supported 00:35:53.060 00:35:53.060 Firmware Slot Information 00:35:53.060 ========================= 00:35:53.060 Active slot: 0 00:35:53.060 00:35:53.060 00:35:53.060 Error Log 00:35:53.060 ========= 00:35:53.060 00:35:53.060 Active Namespaces 00:35:53.060 ================= 00:35:53.060 Discovery Log Page 00:35:53.060 ================== 00:35:53.060 Generation Counter: 2 00:35:53.060 Number of Records: 2 00:35:53.060 Record Format: 0 00:35:53.060 00:35:53.060 Discovery Log Entry 0 00:35:53.060 ---------------------- 00:35:53.060 Transport Type: 3 (TCP) 00:35:53.060 Address Family: 1 (IPv4) 00:35:53.060 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:53.060 Entry Flags: 00:35:53.060 Duplicate Returned Information: 0 00:35:53.060 Explicit Persistent Connection Support for Discovery: 0 00:35:53.060 Transport Requirements: 00:35:53.060 Secure Channel: Not Specified 00:35:53.060 Port ID: 1 (0x0001) 00:35:53.060 Controller ID: 65535 (0xffff) 00:35:53.060 Admin Max SQ Size: 32 00:35:53.060 Transport Service Identifier: 4420 00:35:53.060 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:53.060 Transport Address: 10.0.0.1 00:35:53.060 Discovery Log Entry 1 00:35:53.060 ---------------------- 00:35:53.060 Transport Type: 3 (TCP) 00:35:53.060 Address Family: 1 (IPv4) 00:35:53.060 Subsystem Type: 2 (NVM Subsystem) 00:35:53.060 Entry Flags: 00:35:53.060 Duplicate Returned Information: 0 00:35:53.060 Explicit Persistent Connection Support for Discovery: 0 00:35:53.060 Transport Requirements: 00:35:53.060 Secure Channel: Not Specified 00:35:53.060 Port ID: 1 (0x0001) 00:35:53.060 Controller ID: 65535 (0xffff) 00:35:53.060 Admin Max SQ Size: 32 00:35:53.060 Transport Service Identifier: 4420 00:35:53.060 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:53.060 Transport Address: 10.0.0.1 00:35:53.060 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.060 get_feature(0x01) failed 00:35:53.060 get_feature(0x02) failed 00:35:53.060 get_feature(0x04) failed 00:35:53.060 ===================================================== 00:35:53.060 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:53.060 ===================================================== 00:35:53.060 Controller Capabilities/Features 00:35:53.060 ================================ 00:35:53.060 Vendor ID: 0000 00:35:53.060 Subsystem Vendor ID: 
0000 00:35:53.060 Serial Number: 6981fe60900945671fb0 00:35:53.060 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.060 Firmware Version: 6.8.9-20 00:35:53.060 Recommended Arb Burst: 6 00:35:53.060 IEEE OUI Identifier: 00 00 00 00:35:53.060 Multi-path I/O 00:35:53.060 May have multiple subsystem ports: Yes 00:35:53.060 May have multiple controllers: Yes 00:35:53.060 Associated with SR-IOV VF: No 00:35:53.060 Max Data Transfer Size: Unlimited 00:35:53.060 Max Number of Namespaces: 1024 00:35:53.060 Max Number of I/O Queues: 128 00:35:53.060 NVMe Specification Version (VS): 1.3 00:35:53.060 NVMe Specification Version (Identify): 1.3 00:35:53.060 Maximum Queue Entries: 1024 00:35:53.060 Contiguous Queues Required: No 00:35:53.060 Arbitration Mechanisms Supported 00:35:53.060 Weighted Round Robin: Not Supported 00:35:53.060 Vendor Specific: Not Supported 00:35:53.060 Reset Timeout: 7500 ms 00:35:53.060 Doorbell Stride: 4 bytes 00:35:53.060 NVM Subsystem Reset: Not Supported 00:35:53.060 Command Sets Supported 00:35:53.060 NVM Command Set: Supported 00:35:53.060 Boot Partition: Not Supported 00:35:53.060 Memory Page Size Minimum: 4096 bytes 00:35:53.060 Memory Page Size Maximum: 4096 bytes 00:35:53.060 Persistent Memory Region: Not Supported 00:35:53.060 Optional Asynchronous Events Supported 00:35:53.060 Namespace Attribute Notices: Supported 00:35:53.060 Firmware Activation Notices: Not Supported 00:35:53.060 ANA Change Notices: Supported 00:35:53.060 PLE Aggregate Log Change Notices: Not Supported 00:35:53.060 LBA Status Info Alert Notices: Not Supported 00:35:53.060 EGE Aggregate Log Change Notices: Not Supported 00:35:53.060 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.060 Zone Descriptor Change Notices: Not Supported 00:35:53.060 Discovery Log Change Notices: Not Supported 00:35:53.060 Controller Attributes 00:35:53.060 128-bit Host Identifier: Supported 00:35:53.060 Non-Operational Permissive Mode: Not Supported 00:35:53.060 NVM Sets: Not 
Supported 00:35:53.060 Read Recovery Levels: Not Supported 00:35:53.060 Endurance Groups: Not Supported 00:35:53.060 Predictable Latency Mode: Not Supported 00:35:53.060 Traffic Based Keep ALive: Supported 00:35:53.060 Namespace Granularity: Not Supported 00:35:53.060 SQ Associations: Not Supported 00:35:53.060 UUID List: Not Supported 00:35:53.060 Multi-Domain Subsystem: Not Supported 00:35:53.060 Fixed Capacity Management: Not Supported 00:35:53.060 Variable Capacity Management: Not Supported 00:35:53.060 Delete Endurance Group: Not Supported 00:35:53.060 Delete NVM Set: Not Supported 00:35:53.060 Extended LBA Formats Supported: Not Supported 00:35:53.060 Flexible Data Placement Supported: Not Supported 00:35:53.060 00:35:53.060 Controller Memory Buffer Support 00:35:53.061 ================================ 00:35:53.061 Supported: No 00:35:53.061 00:35:53.061 Persistent Memory Region Support 00:35:53.061 ================================ 00:35:53.061 Supported: No 00:35:53.061 00:35:53.061 Admin Command Set Attributes 00:35:53.061 ============================ 00:35:53.061 Security Send/Receive: Not Supported 00:35:53.061 Format NVM: Not Supported 00:35:53.061 Firmware Activate/Download: Not Supported 00:35:53.061 Namespace Management: Not Supported 00:35:53.061 Device Self-Test: Not Supported 00:35:53.061 Directives: Not Supported 00:35:53.061 NVMe-MI: Not Supported 00:35:53.061 Virtualization Management: Not Supported 00:35:53.061 Doorbell Buffer Config: Not Supported 00:35:53.061 Get LBA Status Capability: Not Supported 00:35:53.061 Command & Feature Lockdown Capability: Not Supported 00:35:53.061 Abort Command Limit: 4 00:35:53.061 Async Event Request Limit: 4 00:35:53.061 Number of Firmware Slots: N/A 00:35:53.061 Firmware Slot 1 Read-Only: N/A 00:35:53.061 Firmware Activation Without Reset: N/A 00:35:53.061 Multiple Update Detection Support: N/A 00:35:53.061 Firmware Update Granularity: No Information Provided 00:35:53.061 Per-Namespace SMART Log: Yes 
00:35:53.061 Asymmetric Namespace Access Log Page: Supported 00:35:53.061 ANA Transition Time : 10 sec 00:35:53.061 00:35:53.061 Asymmetric Namespace Access Capabilities 00:35:53.061 ANA Optimized State : Supported 00:35:53.061 ANA Non-Optimized State : Supported 00:35:53.061 ANA Inaccessible State : Supported 00:35:53.061 ANA Persistent Loss State : Supported 00:35:53.061 ANA Change State : Supported 00:35:53.061 ANAGRPID is not changed : No 00:35:53.061 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:53.061 00:35:53.061 ANA Group Identifier Maximum : 128 00:35:53.061 Number of ANA Group Identifiers : 128 00:35:53.061 Max Number of Allowed Namespaces : 1024 00:35:53.061 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:53.061 Command Effects Log Page: Supported 00:35:53.061 Get Log Page Extended Data: Supported 00:35:53.061 Telemetry Log Pages: Not Supported 00:35:53.061 Persistent Event Log Pages: Not Supported 00:35:53.061 Supported Log Pages Log Page: May Support 00:35:53.061 Commands Supported & Effects Log Page: Not Supported 00:35:53.061 Feature Identifiers & Effects Log Page:May Support 00:35:53.061 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.061 Data Area 4 for Telemetry Log: Not Supported 00:35:53.061 Error Log Page Entries Supported: 128 00:35:53.061 Keep Alive: Supported 00:35:53.061 Keep Alive Granularity: 1000 ms 00:35:53.061 00:35:53.061 NVM Command Set Attributes 00:35:53.061 ========================== 00:35:53.061 Submission Queue Entry Size 00:35:53.061 Max: 64 00:35:53.061 Min: 64 00:35:53.061 Completion Queue Entry Size 00:35:53.061 Max: 16 00:35:53.061 Min: 16 00:35:53.061 Number of Namespaces: 1024 00:35:53.061 Compare Command: Not Supported 00:35:53.061 Write Uncorrectable Command: Not Supported 00:35:53.061 Dataset Management Command: Supported 00:35:53.061 Write Zeroes Command: Supported 00:35:53.061 Set Features Save Field: Not Supported 00:35:53.061 Reservations: Not Supported 00:35:53.061 Timestamp: Not Supported 
00:35:53.061 Copy: Not Supported 00:35:53.061 Volatile Write Cache: Present 00:35:53.061 Atomic Write Unit (Normal): 1 00:35:53.061 Atomic Write Unit (PFail): 1 00:35:53.061 Atomic Compare & Write Unit: 1 00:35:53.061 Fused Compare & Write: Not Supported 00:35:53.061 Scatter-Gather List 00:35:53.061 SGL Command Set: Supported 00:35:53.061 SGL Keyed: Not Supported 00:35:53.061 SGL Bit Bucket Descriptor: Not Supported 00:35:53.061 SGL Metadata Pointer: Not Supported 00:35:53.061 Oversized SGL: Not Supported 00:35:53.061 SGL Metadata Address: Not Supported 00:35:53.061 SGL Offset: Supported 00:35:53.061 Transport SGL Data Block: Not Supported 00:35:53.061 Replay Protected Memory Block: Not Supported 00:35:53.061 00:35:53.061 Firmware Slot Information 00:35:53.061 ========================= 00:35:53.061 Active slot: 0 00:35:53.061 00:35:53.061 Asymmetric Namespace Access 00:35:53.061 =========================== 00:35:53.061 Change Count : 0 00:35:53.061 Number of ANA Group Descriptors : 1 00:35:53.061 ANA Group Descriptor : 0 00:35:53.061 ANA Group ID : 1 00:35:53.061 Number of NSID Values : 1 00:35:53.061 Change Count : 0 00:35:53.061 ANA State : 1 00:35:53.061 Namespace Identifier : 1 00:35:53.061 00:35:53.061 Commands Supported and Effects 00:35:53.061 ============================== 00:35:53.061 Admin Commands 00:35:53.061 -------------- 00:35:53.061 Get Log Page (02h): Supported 00:35:53.061 Identify (06h): Supported 00:35:53.061 Abort (08h): Supported 00:35:53.061 Set Features (09h): Supported 00:35:53.061 Get Features (0Ah): Supported 00:35:53.061 Asynchronous Event Request (0Ch): Supported 00:35:53.061 Keep Alive (18h): Supported 00:35:53.061 I/O Commands 00:35:53.061 ------------ 00:35:53.061 Flush (00h): Supported 00:35:53.061 Write (01h): Supported LBA-Change 00:35:53.061 Read (02h): Supported 00:35:53.061 Write Zeroes (08h): Supported LBA-Change 00:35:53.061 Dataset Management (09h): Supported 00:35:53.061 00:35:53.061 Error Log 00:35:53.061 ========= 
00:35:53.061 Entry: 0 00:35:53.061 Error Count: 0x3 00:35:53.061 Submission Queue Id: 0x0 00:35:53.061 Command Id: 0x5 00:35:53.061 Phase Bit: 0 00:35:53.061 Status Code: 0x2 00:35:53.061 Status Code Type: 0x0 00:35:53.061 Do Not Retry: 1 00:35:53.061 Error Location: 0x28 00:35:53.061 LBA: 0x0 00:35:53.061 Namespace: 0x0 00:35:53.061 Vendor Log Page: 0x0 00:35:53.061 ----------- 00:35:53.061 Entry: 1 00:35:53.061 Error Count: 0x2 00:35:53.061 Submission Queue Id: 0x0 00:35:53.061 Command Id: 0x5 00:35:53.061 Phase Bit: 0 00:35:53.061 Status Code: 0x2 00:35:53.061 Status Code Type: 0x0 00:35:53.061 Do Not Retry: 1 00:35:53.061 Error Location: 0x28 00:35:53.061 LBA: 0x0 00:35:53.061 Namespace: 0x0 00:35:53.061 Vendor Log Page: 0x0 00:35:53.061 ----------- 00:35:53.061 Entry: 2 00:35:53.061 Error Count: 0x1 00:35:53.061 Submission Queue Id: 0x0 00:35:53.061 Command Id: 0x4 00:35:53.061 Phase Bit: 0 00:35:53.061 Status Code: 0x2 00:35:53.061 Status Code Type: 0x0 00:35:53.061 Do Not Retry: 1 00:35:53.061 Error Location: 0x28 00:35:53.061 LBA: 0x0 00:35:53.061 Namespace: 0x0 00:35:53.061 Vendor Log Page: 0x0 00:35:53.061 00:35:53.061 Number of Queues 00:35:53.061 ================ 00:35:53.061 Number of I/O Submission Queues: 128 00:35:53.061 Number of I/O Completion Queues: 128 00:35:53.061 00:35:53.061 ZNS Specific Controller Data 00:35:53.061 ============================ 00:35:53.061 Zone Append Size Limit: 0 00:35:53.061 00:35:53.061 00:35:53.061 Active Namespaces 00:35:53.061 ================= 00:35:53.061 get_feature(0x05) failed 00:35:53.061 Namespace ID:1 00:35:53.061 Command Set Identifier: NVM (00h) 00:35:53.062 Deallocate: Supported 00:35:53.062 Deallocated/Unwritten Error: Not Supported 00:35:53.062 Deallocated Read Value: Unknown 00:35:53.062 Deallocate in Write Zeroes: Not Supported 00:35:53.062 Deallocated Guard Field: 0xFFFF 00:35:53.062 Flush: Supported 00:35:53.062 Reservation: Not Supported 00:35:53.062 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:53.062 Size (in LBAs): 3750748848 (1788GiB) 00:35:53.062 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:53.062 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:53.062 UUID: d2a8836d-8f79-4edd-8c79-59300313dcd4 00:35:53.062 Thin Provisioning: Not Supported 00:35:53.062 Per-NS Atomic Units: Yes 00:35:53.062 Atomic Write Unit (Normal): 8 00:35:53.062 Atomic Write Unit (PFail): 8 00:35:53.062 Preferred Write Granularity: 8 00:35:53.062 Atomic Compare & Write Unit: 8 00:35:53.062 Atomic Boundary Size (Normal): 0 00:35:53.062 Atomic Boundary Size (PFail): 0 00:35:53.062 Atomic Boundary Offset: 0 00:35:53.062 NGUID/EUI64 Never Reused: No 00:35:53.062 ANA group ID: 1 00:35:53.062 Namespace Write Protected: No 00:35:53.062 Number of LBA Formats: 1 00:35:53.062 Current LBA Format: LBA Format #00 00:35:53.062 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:53.062 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.062 rmmod nvme_tcp 00:35:53.062 rmmod nvme_fabrics 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:53.062 11:47:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.062 11:47:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:55.608 11:47:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:58.906 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:35:58.906 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:58.906 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:59.166 00:35:59.166 real 0m18.744s 00:35:59.166 user 0m4.837s 00:35:59.166 sys 0m10.861s 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:59.166 ************************************ 00:35:59.166 END TEST nvmf_identify_kernel_target 00:35:59.166 ************************************ 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.166 ************************************ 00:35:59.166 START TEST nvmf_auth_host 00:35:59.166 ************************************ 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:59.166 * Looking for test storage... 
00:35:59.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:59.166 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.426 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:59.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.427 --rc genhtml_branch_coverage=1 00:35:59.427 --rc genhtml_function_coverage=1 00:35:59.427 --rc genhtml_legend=1 00:35:59.427 --rc geninfo_all_blocks=1 00:35:59.427 --rc geninfo_unexecuted_blocks=1 00:35:59.427 00:35:59.427 ' 00:35:59.427 11:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:59.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.427 --rc genhtml_branch_coverage=1 00:35:59.427 --rc genhtml_function_coverage=1 00:35:59.427 --rc genhtml_legend=1 00:35:59.427 --rc geninfo_all_blocks=1 00:35:59.427 --rc geninfo_unexecuted_blocks=1 00:35:59.427 00:35:59.427 ' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:59.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.427 --rc genhtml_branch_coverage=1 00:35:59.427 --rc genhtml_function_coverage=1 00:35:59.427 --rc genhtml_legend=1 00:35:59.427 --rc geninfo_all_blocks=1 00:35:59.427 --rc geninfo_unexecuted_blocks=1 00:35:59.427 00:35:59.427 ' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:59.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.427 --rc genhtml_branch_coverage=1 00:35:59.427 --rc genhtml_function_coverage=1 00:35:59.427 --rc genhtml_legend=1 00:35:59.427 --rc geninfo_all_blocks=1 00:35:59.427 --rc geninfo_unexecuted_blocks=1 00:35:59.427 00:35:59.427 ' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.427 11:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:59.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.427 11:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.427 11:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:07.565 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:07.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:07.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:07.566 Found net devices under 0000:31:00.0: cvl_0_0 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:07.566 Found net devices under 0000:31:00.1: cvl_0_1 00:36:07.566 11:48:05 
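The discovery loop above resolves each whitelisted PCI address to its kernel network interfaces through the `/sys/bus/pci/devices/$pci/net/` glob, which is how `0000:31:00.0` maps to `cvl_0_0`. A standalone sketch of just that sysfs lookup (`pci_net_ifs` is a hypothetical helper name):

```shell
# Sketch of the sysfs lookup used above: each PCI NIC exposes its
# network interfaces under /sys/bus/pci/devices/<addr>/net/.
# pci_net_ifs is a hypothetical helper name.
pci_net_ifs() {
    local pci=$1 entry
    for entry in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$entry" ] || continue   # glob did not match: no net devices
        echo "${entry##*/}"           # basename, e.g. cvl_0_0
    done
}

# In the run above, pci_net_ifs 0000:31:00.0 would print cvl_0_0.
```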
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:07.566 11:48:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:07.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:07.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:36:07.566 00:36:07.566 --- 10.0.0.2 ping statistics --- 00:36:07.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.566 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:07.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
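The `nvmf_tcp_init` sequence above carves the target NIC into its own network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP/4420 through iptables, and pings across. A veth-based sketch of the same plumbing for a machine without a spare physical port (all names here are assumptions; the real run moves the physical interface `cvl_0_0`; this requires root, so it is only defined as a function, not executed):

```shell
# veth-based sketch of the namespace plumbing traced above; the real
# run moves physical port cvl_0_0 instead of a veth peer. Names
# (setup_test_ns, veth_host, veth_tgt) are hypothetical. Needs root.
setup_test_ns() {
    local ns=${1:-nvmf_tgt_ns}
    ip netns add "$ns"
    ip link add veth_host type veth peer name veth_tgt
    ip link set veth_tgt netns "$ns"                          # target side into the ns
    ip addr add 10.0.0.1/24 dev veth_host                     # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt  # target IP
    ip link set veth_host up
    ip netns exec "$ns" ip link set veth_tgt up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port, as the ipts wrapper does above
    iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # sanity check, as above
}
```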
00:36:07.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:36:07.566 00:36:07.566 --- 10.0.0.1 ping statistics --- 00:36:07.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.566 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:07.566 11:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2751999 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2751999 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2751999 ']' 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.566 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.567 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.827 11:48:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=14ce761d98e78fdc21fe7fd48f548305 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.M7q 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 14ce761d98e78fdc21fe7fd48f548305 0 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 14ce761d98e78fdc21fe7fd48f548305 0 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=14ce761d98e78fdc21fe7fd48f548305 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.M7q 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.M7q 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.M7q 
00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e39232b3259ac8de8311a1ed869705a08f5040cd6aeb461a065c159ae753814c 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oeX 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e39232b3259ac8de8311a1ed869705a08f5040cd6aeb461a065c159ae753814c 3 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e39232b3259ac8de8311a1ed869705a08f5040cd6aeb461a065c159ae753814c 3 00:36:07.827 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.828 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.828 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e39232b3259ac8de8311a1ed869705a08f5040cd6aeb461a065c159ae753814c 00:36:07.828 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:07.828 11:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oeX 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oeX 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oeX 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf1e54677bab4b08076729c0cac3c71abc7ec037f13ea979 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cDT 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf1e54677bab4b08076729c0cac3c71abc7ec037f13ea979 0 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf1e54677bab4b08076729c0cac3c71abc7ec037f13ea979 0 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf1e54677bab4b08076729c0cac3c71abc7ec037f13ea979 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cDT 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cDT 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cDT 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a75d186b6738e755ab5c5c248d60c01f371b359da8086257 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.diE 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a75d186b6738e755ab5c5c248d60c01f371b359da8086257 2 00:36:07.828 11:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a75d186b6738e755ab5c5c248d60c01f371b359da8086257 2 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a75d186b6738e755ab5c5c248d60c01f371b359da8086257 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.diE 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.diE 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.diE 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=23437e83246d0fb70c6b8f3aabf413f4 00:36:07.828 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.m8X 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 23437e83246d0fb70c6b8f3aabf413f4 1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 23437e83246d0fb70c6b8f3aabf413f4 1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=23437e83246d0fb70c6b8f3aabf413f4 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.m8X 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.m8X 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.m8X 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c77436bb703c972f3f9c38f320fb75f 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iY2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c77436bb703c972f3f9c38f320fb75f 1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c77436bb703c972f3f9c38f320fb75f 1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c77436bb703c972f3f9c38f320fb75f 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iY2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iY2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iY2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.090 11:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3354ee9493a7993f11a5f8f53e9593d919e76c932731c41 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.849 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3354ee9493a7993f11a5f8f53e9593d919e76c932731c41 2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c3354ee9493a7993f11a5f8f53e9593d919e76c932731c41 2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3354ee9493a7993f11a5f8f53e9593d919e76c932731c41 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.849 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.849 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.849 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.090 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3aaa63efec28a8798e9080a879b44885 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.t2F 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3aaa63efec28a8798e9080a879b44885 0 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3aaa63efec28a8798e9080a879b44885 0 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3aaa63efec28a8798e9080a879b44885 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.t2F 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.t2F 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.t2F 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dabb76e8c96af9854b7da86be3dbca2314bd215fc69266cd53cb217bf72e0628 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Eoc 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dabb76e8c96af9854b7da86be3dbca2314bd215fc69266cd53cb217bf72e0628 3 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dabb76e8c96af9854b7da86be3dbca2314bd215fc69266cd53cb217bf72e0628 3 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dabb76e8c96af9854b7da86be3dbca2314bd215fc69266cd53cb217bf72e0628 00:36:08.091 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:08.091 11:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Eoc 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Eoc 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Eoc 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2751999 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2751999 ']' 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
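The `format_dhchap_key` / `format_key DHHC-1` / `python -` steps traced above wrap each hex string into a `DHHC-1:<digest>:<base64 payload>:` secret. Judging from the secrets echoed later in this log (e.g. `DHHC-1:00:YmYx...`, where the base64 prefix decodes to the ASCII hex key) and the usual NVMe DH-HMAC-CHAP secret convention, the payload is the ASCII key followed by its CRC32 as four little-endian bytes, base64-encoded. This reconstruction is an inference from the log, not SPDK's exact code:

```shell
#!/usr/bin/env bash
# Sketch (format inferred from the DHHC-1 secrets visible in this log):
# payload = base64( ascii_key + crc32(ascii_key) as 4 little-endian bytes )
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1], sys.argv[2]
raw = key.encode()
payload = raw + struct.pack("<I", zlib.crc32(raw))  # assumed LE CRC32 suffix
print(f"DHHC-1:{int(digest):02}:{base64.b64encode(payload).decode()}:")
EOF
}

format_dhchap_key bf1e54677bab4b08076729c0cac3c71abc7ec037f13ea979 0
```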
00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M7q 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oeX ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oeX 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cDT 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
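The `for i in "${!keys[@]}"` trace above registers each generated key file as `key$i` via `keyring_file_add_key`, and, whenever a matching controller key exists in `ckeys[]`, also registers it as `ckey$i`. A dry-run sketch of that loop, echoing the RPC calls instead of issuing them against a live SPDK target (array contents taken from this run's trace; the second ckey slot is left empty to show the conditional):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the host/auth.sh registration loop traced above;
# the real test calls "rpc_cmd keyring_file_add_key" against spdk.sock.
register_keys() {
    local i
    for i in "${!keys[@]}"; do
        echo "keyring_file_add_key key$i ${keys[i]}"
        if [[ -n ${ckeys[i]:-} ]]; then            # ckey is optional per slot
            echo "keyring_file_add_key ckey$i ${ckeys[i]}"
        fi
    done
}

keys=(/tmp/spdk.key-null.M7q /tmp/spdk.key-null.cDT)
ckeys=(/tmp/spdk.key-sha512.oeX "")
register_keys
```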
00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.diE ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.diE 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.m8X 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.352 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iY2 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iY2 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.849 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.t2F ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.t2F 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Eoc 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.614 11:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:08.614 11:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.919 Waiting for block devices as requested 00:36:11.919 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:11.919 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:12.179 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:12.179 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:12.179 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:12.439 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:12.439 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:12.439 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:12.699 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:12.699 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:12.699 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:12.960 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:12.960 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:12.960 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:13.221 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:13.221 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:13.221 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:14.165 No valid GPT data, bailing 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:14.165 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:14.427 00:36:14.427 Discovery Log Number of Records 2, Generation counter 2 00:36:14.427 =====Discovery Log Entry 0====== 00:36:14.427 trtype: tcp 00:36:14.427 adrfam: ipv4 00:36:14.427 subtype: current discovery subsystem 00:36:14.427 treq: not specified, sq flow control disable supported 00:36:14.427 portid: 1 00:36:14.427 trsvcid: 4420 00:36:14.427 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:14.427 traddr: 10.0.0.1 00:36:14.427 eflags: none 00:36:14.427 sectype: none 00:36:14.427 =====Discovery Log Entry 1====== 00:36:14.427 trtype: tcp 00:36:14.427 adrfam: ipv4 00:36:14.427 subtype: nvme subsystem 00:36:14.427 treq: not specified, sq flow control disable supported 00:36:14.427 portid: 1 00:36:14.427 trsvcid: 4420 00:36:14.427 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:14.427 traddr: 10.0.0.1 00:36:14.427 eflags: none 00:36:14.427 sectype: none 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.427 nvme0n1 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.427 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.428 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.428 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.428 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.428 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:14.428 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.689 nvme0n1 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.689 11:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.689 11:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.689 
11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.689 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.690 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.690 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.690 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.950 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.951 nvme0n1 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.951 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:36:15.212 nvme0n1 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.212 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.473 nvme0n1 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.473 11:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.473 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.474 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.735 nvme0n1 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.735 
11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:15.735 
11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.735 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.736 11:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.736 11:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 nvme0n1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 11:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.000 11:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.261 nvme0n1 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.261 11:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.261 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.522 nvme0n1 00:36:16.522 11:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:16.522 11:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.522 11:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.782 nvme0n1 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:16.782 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.783 11:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.783 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.043 nvme0n1 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.043 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.044 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.303 nvme0n1 00:36:17.304 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.304 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.304 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.304 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.304 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:17.564 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.565 
11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.565 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 nvme0n1 00:36:17.826 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.826 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.826 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.826 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.826 11:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.826 11:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.826 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.827 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.827 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.827 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.088 nvme0n1 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.088 11:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:18.088 
11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.088 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.350 11:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.350 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.611 nvme0n1 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.611 11:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.611 
11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.611 11:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.873 nvme0n1 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.873 11:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.873 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.444 nvme0n1 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.444 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.445 11:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.445 11:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.017 nvme0n1 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.017 11:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.017 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 nvme0n1 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.588 11:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:20.588 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.588 11:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.589 11:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.589 11:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.160 nvme0n1 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.160 11:48:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:36:21.160 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.161 11:48:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.161 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.732 nvme0n1 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.732 11:48:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.732 11:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.674 nvme0n1 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.674 11:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.674 11:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.674 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.675 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.675 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.675 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.675 11:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.675 11:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.246 nvme0n1 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.246 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.247 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.247 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:23.247 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.247 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.507 11:48:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.507 11:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.077 nvme0n1 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.077 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.336 11:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.905 nvme0n1 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.905 
11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.905 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.843 nvme0n1 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.843 11:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.843 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.115 nvme0n1 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.115 
11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:26.115 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.116 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.375 nvme0n1 
00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:26.375 11:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.375 
11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.375 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.376 nvme0n1 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.376 11:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.376 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.636 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.637 nvme0n1 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.637 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.897 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.897 11:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.897 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.898 11:48:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.898 nvme0n1 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.898 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.158 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.159 nvme0n1 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.159 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.419 
11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.419 nvme0n1 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.419 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 
00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.679 11:48:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.679 11:48:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.679 nvme0n1 00:36:27.679 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.679 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.679 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.680 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.680 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.975 11:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.975 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.976 nvme0n1 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.976 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.236 nvme0n1 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.236 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.496 11:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.496 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.497 11:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.497 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.497 11:48:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.756 nvme0n1 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:28.756 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.757 11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.757 
11:48:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.757 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.016 nvme0n1 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.016 11:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.016 11:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:29.016 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.017 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.276 nvme0n1 00:36:29.276 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.276 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.276 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.276 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.276 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.537 11:48:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.537 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.798 nvme0n1 00:36:29.798 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.798 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.798 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.798 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.798 11:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.798 11:48:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:29.798 11:48:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.798 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:29.799 
11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.799 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.059 nvme0n1 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.059 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.319 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.319 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.319 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.320 11:48:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.320 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.581 nvme0n1 
00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.581 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:30.842 11:48:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.842 
11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.842 11:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.415 nvme0n1 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.415 11:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.415 11:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.415 11:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.676 nvme0n1 00:36:31.676 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.676 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.676 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.676 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.676 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:31.937 11:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.937 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.938 11:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.938 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.511 nvme0n1 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.511 11:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.511 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:32.512 11:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.772 nvme0n1 00:36:32.772 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.772 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.772 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.772 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.772 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:33.033 11:48:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.033 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.034 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.740 nvme0n1 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.740 11:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.740 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.684 nvme0n1 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.684 11:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.625 nvme0n1 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.625 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.626 11:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.198 nvme0n1 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:36.198 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.199 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.459 11:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:37.029 nvme0n1 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:37.029 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.030 11:48:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.030 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.289 nvme0n1 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.289 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.290 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.550 nvme0n1 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.550 11:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.812 nvme0n1 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.812 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.813 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.074 nvme0n1 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.074 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.075 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:38.336 nvme0n1 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.336 11:48:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.336 11:48:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.336 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.337 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.337 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.337 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.598 nvme0n1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:38.598 11:48:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.598 11:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.859 nvme0n1 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.859 
11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.859 11:48:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.859 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.860 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.860 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.119 nvme0n1 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.119 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.120 11:48:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.120 11:48:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.120 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.380 nvme0n1 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:39.380 11:48:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.380 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.641 nvme0n1 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.641 
11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:39.641 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.642 
11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.642 11:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.903 nvme0n1 00:36:39.903 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.903 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.903 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.903 11:48:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.903 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.903 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.164 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.164 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.164 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.164 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.165 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.428 nvme0n1 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.428 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.691 nvme0n1 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.691 11:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.691 11:48:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.691 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.952 nvme0n1 00:36:40.952 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.215 11:48:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.215 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.476 nvme0n1 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.476 
11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.476 11:48:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.476 11:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.047 nvme0n1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.047 11:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.047 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.618 nvme0n1 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:42.618 
11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.618 11:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.618 11:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.190 nvme0n1 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.190 11:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.190 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.191 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.191 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.763 nvme0n1 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:43.763 11:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.763 11:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.336 nvme0n1 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.336 
11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRjZTc2MWQ5OGU3OGZkYzIxZmU3ZmQ0OGY1NDgzMDUSmGNy: 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM5MjMyYjMyNTlhYzhkZTgzMTFhMWVkODY5NzA1YTA4ZjUwNDBjZDZhZWI0NjFhMDY1YzE1OWFlNzUzODE0Yz6RqfU=: 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.336 11:48:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.336 11:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.279 nvme0n1 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.279 11:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.279 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:45.280 11:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.280 11:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.280 11:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.852 nvme0n1 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.852 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.852 11:48:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:45.853 11:48:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.853 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.792 nvme0n1 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzMzNTRlZTk0OTNhNzk5M2YxMWE1ZjhmNTNlOTU5M2Q5MTllNzZjOTMyNzMxYzQxErmjDg==: 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FhYTYzZWZlYzI4YTg3OThlOTA4MGE4NzliNDQ4ODXBOJy0: 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:46.792 11:48:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.792 11:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.362 nvme0n1 00:36:47.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFiYjc2ZThjOTZhZjk4NTRiN2RhODZiZTNkYmNhMjMxNGJkMjE1ZmM2OTI2NmNkNTNjYjIxN2JmNzJlMDYyOIQU1no=: 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.622 
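The `DHHC-1:…` strings echoed throughout this trace are DH-HMAC-CHAP secrets in the NVMe-oF on-wire representation: `DHHC-1:<hash id>:<base64 blob>:`, where the blob is the secret followed by a 4-byte CRC-32 trailer. A minimal sketch decoding key1 from this run (format per the NVMe-oF auth spec; the CRC check is omitted, and the 48-character ASCII-hex secret is how these test keys are generated):

```shell
# Decode one of the DH-HMAC-CHAP secrets seen in this trace (key1).
# Format assumption: DHHC-1:<hash id>:<base64(secret || 4-byte CRC-32)>:
dhchap_key='DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==:'
blob_b64=${dhchap_key#DHHC-1:00:}   # strip the scheme/hash-id prefix
blob_b64=${blob_b64%:}              # strip the trailing colon
# decode, then drop the 4-byte CRC trailer; the generated secret is ASCII hex
secret=$(printf '%s' "$blob_b64" | base64 -d | head -c 48)
echo "secret: $secret (${#secret} hex chars)"
```

The same parse applies to every key/ckey echoed by host/auth.sh@50 and @51 in this log; only the hash id (`00`..`03`) and blob differ.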
11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.622 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:47.623 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:47.623 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:47.623 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.623 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.623 11:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.562 nvme0n1 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.562 request:
00:36:48.562 {
00:36:48.562 "name": "nvme0",
00:36:48.562 "trtype": "tcp",
00:36:48.562 "traddr": "10.0.0.1",
00:36:48.562 "adrfam": "ipv4",
00:36:48.562 "trsvcid": "4420",
00:36:48.562 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:36:48.562 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:36:48.562 "prchk_reftag": false,
00:36:48.562 "prchk_guard": false,
00:36:48.562 "hdgst": false,
00:36:48.562 "ddgst": false,
00:36:48.562 "allow_unrecognized_csi": false,
00:36:48.562 "method": "bdev_nvme_attach_controller",
00:36:48.562 "req_id": 1
00:36:48.562 }
00:36:48.562 Got JSON-RPC error response
00:36:48.562 response:
00:36:48.562 {
00:36:48.562 "code": -5,
00:36:48.562 "message": "Input/output error"
00:36:48.562 }
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:48.562 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775
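The `NOT rpc_cmd …` / `valid_exec_arg` / `es=1` steps traced here are autotest's expected-failure wrapper: the connect attempt without the right DH-HMAC-CHAP key must fail, and the wrapper inverts the command's exit status. A simplified sketch of that pattern (the real helper in autotest_common.sh is richer, e.g. it treats crash-like statuses above 128 specially):

```shell
# Simplified sketch of autotest's NOT helper: succeed only when the
# wrapped command fails with an ordinary (non-crash) error status.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # crash/signal statuses are not an "expected" failure
    (( es != 0 ))                # invert: status 0 only if the command failed
}
NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

This is why the `[[ 1 == 0 ]]` check after the error response is fine: `rpc_cmd` returned nonzero, the wrapper set `es=1`, and the negative test passes.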
-- # [[ -z tcp ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.563 request: 
00:36:48.563 { 00:36:48.563 "name": "nvme0", 00:36:48.563 "trtype": "tcp", 00:36:48.563 "traddr": "10.0.0.1", 00:36:48.563 "adrfam": "ipv4", 00:36:48.563 "trsvcid": "4420", 00:36:48.563 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:48.563 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:48.563 "prchk_reftag": false, 00:36:48.563 "prchk_guard": false, 00:36:48.563 "hdgst": false, 00:36:48.563 "ddgst": false, 00:36:48.563 "dhchap_key": "key2", 00:36:48.563 "allow_unrecognized_csi": false, 00:36:48.563 "method": "bdev_nvme_attach_controller", 00:36:48.563 "req_id": 1 00:36:48.563 } 00:36:48.563 Got JSON-RPC error response 00:36:48.563 response: 00:36:48.563 { 00:36:48.563 "code": -5, 00:36:48.563 "message": "Input/output error" 00:36:48.563 } 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.563 11:48:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.563 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.825 request: 00:36:48.825 { 00:36:48.825 "name": "nvme0", 00:36:48.825 "trtype": "tcp", 00:36:48.825 "traddr": "10.0.0.1", 00:36:48.825 "adrfam": "ipv4", 00:36:48.825 "trsvcid": "4420", 00:36:48.825 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:48.825 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:48.825 "prchk_reftag": false, 00:36:48.825 "prchk_guard": false, 00:36:48.825 "hdgst": false, 00:36:48.825 "ddgst": false, 00:36:48.825 "dhchap_key": "key1", 00:36:48.825 "dhchap_ctrlr_key": "ckey2", 00:36:48.825 "allow_unrecognized_csi": false, 00:36:48.825 "method": "bdev_nvme_attach_controller", 00:36:48.825 "req_id": 1 00:36:48.825 } 00:36:48.825 Got JSON-RPC error response 00:36:48.825 response: 00:36:48.825 { 00:36:48.825 "code": -5, 00:36:48.825 "message": "Input/output error" 00:36:48.825 } 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.825 11:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.825 nvme0n1 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:48.825 11:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.825 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.087 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:49.088 
11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.088 request:
00:36:49.088 {
00:36:49.088 "name": "nvme0",
00:36:49.088 "dhchap_key": "key1",
00:36:49.088 "dhchap_ctrlr_key": "ckey2",
00:36:49.088 "method": "bdev_nvme_set_keys",
00:36:49.088 "req_id": 1
00:36:49.088 }
00:36:49.088 Got JSON-RPC error response
00:36:49.088 response:
00:36:49.088 {
00:36:49.088 "code": -13,
00:36:49.088 "message": "Permission denied"
00:36:49.088 }
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:36:49.088 11:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:36:50.031 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:36:50.031 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:36:50.031 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:50.031 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.031 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:50.291 11:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:36:50.291 11:48:49
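The repeating `host/auth.sh@137`/`@138` records here are a teardown wait: poll `bdev_nvme_get_controllers`, count the entries with `jq length`, and sleep 1s until the count reaches zero. The same loop with a hypothetical stand-in for the RPC pipeline, so the sketch runs anywhere:

```shell
# Sketch of the teardown wait loop at host/auth.sh@137-138. `remaining`
# is a hypothetical stand-in for the value of
#   rpc_cmd bdev_nvme_get_controllers | jq length
remaining=3
polls=0
while :; do
    count=$remaining              # stand-in for the RPC + jq length pipeline
    if (( count == 0 )); then
        break                     # controller list is empty; teardown complete
    fi
    (( ++polls ))
    (( remaining-- ))             # the real controller detaches asynchronously
    sleep 0.1                     # host/auth.sh sleeps 1s between polls
done
echo "controller gone after $polls polls"
```

In the trace above the count stays at 1 for two polls (`(( 1 != 0 ))`) before the detach completes and `(( 0 != 0 ))` lets the test proceed.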
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmYxZTU0Njc3YmFiNGIwODA3NjcyOWMwY2FjM2M3MWFiYzdlYzAzN2YxM2VhOTc5+cwctQ==: 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: ]] 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTc1ZDE4NmI2NzM4ZTc1NWFiNWM1YzI0OGQ2MGMwMWYzNzFiMzU5ZGE4MDg2MjU3gB5/5A==: 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.232 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.492 nvme0n1 00:36:51.492 11:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjM0MzdlODMyNDZkMGZiNzBjNmI4ZjNhYWJmNDEzZjRKu9l5: 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: ]] 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGM3NzQzNmJiNzAzYzk3MmYzZjljMzhmMzIwZmI3NWYZAlim: 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.492 11:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.492 request: 00:36:51.492 { 00:36:51.492 "name": "nvme0", 00:36:51.492 "dhchap_key": "key2", 00:36:51.492 "dhchap_ctrlr_key": "ckey1", 00:36:51.492 "method": "bdev_nvme_set_keys", 00:36:51.492 "req_id": 1 00:36:51.492 } 00:36:51.492 Got JSON-RPC error response 00:36:51.492 response: 00:36:51.492 { 00:36:51.492 "code": -13, 00:36:51.492 "message": "Permission denied" 00:36:51.492 } 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:51.492 11:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:51.492 11:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:52.434 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.694 rmmod nvme_tcp 
00:36:52.694 rmmod nvme_fabrics 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2751999 ']' 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2751999 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2751999 ']' 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2751999 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751999 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751999' 00:36:52.694 killing process with pid 2751999 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2751999 00:36:52.694 11:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2751999 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.265 11:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:55.807 11:48:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:55.807 11:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:59.104 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:59.104 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:59.364 11:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.M7q /tmp/spdk.key-null.cDT /tmp/spdk.key-sha256.m8X /tmp/spdk.key-sha384.849 
/tmp/spdk.key-sha512.Eoc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:59.364 11:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:03.572 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:03.572 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:03.572 00:37:03.572 real 1m4.047s 00:37:03.572 user 0m57.659s 00:37:03.572 sys 0m16.060s 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.572 ************************************ 00:37:03.572 END TEST nvmf_auth_host 00:37:03.572 ************************************ 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.572 ************************************ 00:37:03.572 START TEST nvmf_digest 00:37:03.572 ************************************ 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:03.572 * Looking for test storage... 00:37:03.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.572 11:49:02 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:03.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.572 --rc genhtml_branch_coverage=1 00:37:03.572 --rc genhtml_function_coverage=1 00:37:03.572 --rc genhtml_legend=1 00:37:03.572 --rc geninfo_all_blocks=1 00:37:03.572 --rc geninfo_unexecuted_blocks=1 00:37:03.572 00:37:03.572 ' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:03.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.572 --rc genhtml_branch_coverage=1 00:37:03.572 --rc genhtml_function_coverage=1 00:37:03.572 --rc genhtml_legend=1 00:37:03.572 --rc geninfo_all_blocks=1 00:37:03.572 --rc geninfo_unexecuted_blocks=1 00:37:03.572 00:37:03.572 ' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:03.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.572 --rc genhtml_branch_coverage=1 00:37:03.572 --rc genhtml_function_coverage=1 00:37:03.572 --rc genhtml_legend=1 00:37:03.572 --rc geninfo_all_blocks=1 00:37:03.572 --rc geninfo_unexecuted_blocks=1 00:37:03.572 00:37:03.572 ' 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:03.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.572 --rc genhtml_branch_coverage=1 00:37:03.572 --rc genhtml_function_coverage=1 00:37:03.572 --rc genhtml_legend=1 00:37:03.572 --rc geninfo_all_blocks=1 00:37:03.572 --rc geninfo_unexecuted_blocks=1 00:37:03.572 00:37:03.572 ' 00:37:03.572 11:49:02 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.572 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.573 
11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:03.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.573 11:49:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.573 11:49:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.707 11:49:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.707 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:11.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:11.708 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:11.708 Found net devices under 0000:31:00.0: cvl_0_0 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:11.708 Found net devices under 0000:31:00.1: cvl_0_1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.708 11:49:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:11.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:37:11.708 00:37:11.708 --- 10.0.0.2 ping statistics --- 00:37:11.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.708 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:11.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:37:11.708 00:37:11.708 --- 10.0.0.1 ping statistics --- 00:37:11.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.708 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.708 ************************************ 00:37:11.708 START TEST nvmf_digest_clean 00:37:11.708 ************************************ 00:37:11.708 
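Earlier in this trace, `nvmf/common.sh: line 33: [: : integer expression expected` is logged when `'[' '' -eq 1 ']'` runs with an empty operand: bash's `-eq` requires integers on both sides, so the test errors out (and evaluates false), but the script keeps going. A minimal reproduction, plus the generic `${var:-0}` default idiom as a guard (this is a standard bash fix, not necessarily how SPDK's common.sh handles it):

```shell
#!/usr/bin/env bash
# Reproduces the benign complaint logged by nvmf/common.sh line 33:
# an empty string is not a valid operand for bash's integer test -eq.
flag=""

# Without the redirect this prints "[: : integer expression expected"
# to stderr; the test itself evaluates false, so execution continues.
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "flag set"
fi

# Defaulting the variable keeps the comparison well-formed either way.
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

With `flag` empty, the guarded form compares `0 -eq 1` and takes the else branch instead of erroring.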
11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2770200 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2770200 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2770200 ']' 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.708 11:49:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.708 [2024-12-07 11:49:10.203760] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:11.708 [2024-12-07 11:49:10.203873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.708 [2024-12-07 11:49:10.336124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.708 [2024-12-07 11:49:10.432068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.708 [2024-12-07 11:49:10.432110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.708 [2024-12-07 11:49:10.432122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.708 [2024-12-07 11:49:10.432133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.708 [2024-12-07 11:49:10.432144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:11.708 [2024-12-07 11:49:10.433329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.708 11:49:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.968 null0 00:37:11.968 [2024-12-07 11:49:11.253189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.968 [2024-12-07 11:49:11.277453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2770285 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2770285 /var/tmp/bperf.sock 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2770285 ']' 00:37:11.968 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:11.969 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.969 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.969 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:11.969 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.969 11:49:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:12.230 [2024-12-07 11:49:11.362395] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:12.230 [2024-12-07 11:49:11.362501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770285 ] 00:37:12.230 [2024-12-07 11:49:11.502781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.492 [2024-12-07 11:49:11.600810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.067 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.067 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:13.067 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:13.067 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:13.067 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:13.327 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.327 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.586 nvme0n1 00:37:13.586 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:13.586 11:49:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.845 Running I/O for 2 seconds... 00:37:15.722 17204.00 IOPS, 67.20 MiB/s [2024-12-07T10:49:15.076Z] 17674.50 IOPS, 69.04 MiB/s 00:37:15.722 Latency(us) 00:37:15.722 [2024-12-07T10:49:15.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:15.722 nvme0n1 : 2.00 17707.74 69.17 0.00 0.00 7220.62 2812.59 22500.69 00:37:15.722 [2024-12-07T10:49:15.076Z] =================================================================================================================== 00:37:15.722 [2024-12-07T10:49:15.076Z] Total : 17707.74 69.17 0.00 0.00 7220.62 2812.59 22500.69 00:37:15.722 { 00:37:15.722 "results": [ 00:37:15.722 { 00:37:15.722 "job": "nvme0n1", 00:37:15.722 "core_mask": "0x2", 00:37:15.722 "workload": "randread", 00:37:15.722 "status": "finished", 00:37:15.722 "queue_depth": 128, 00:37:15.722 "io_size": 4096, 00:37:15.722 "runtime": 2.003474, 00:37:15.722 "iops": 17707.741652749173, 00:37:15.722 "mibps": 69.17086583105146, 00:37:15.722 "io_failed": 0, 00:37:15.722 "io_timeout": 0, 00:37:15.722 "avg_latency_us": 7220.623849818193, 00:37:15.722 "min_latency_us": 2812.5866666666666, 00:37:15.722 "max_latency_us": 22500.693333333333 00:37:15.722 } 00:37:15.722 ], 00:37:15.722 "core_count": 1 00:37:15.722 } 00:37:15.722 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:15.722 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:37:15.722 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:15.722 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:15.722 | select(.opcode=="crc32c") 00:37:15.722 | "\(.module_name) \(.executed)"' 00:37:15.722 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2770285 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2770285 ']' 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2770285 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770285 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2770285' 00:37:15.989 killing process with pid 2770285 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2770285 00:37:15.989 Received shutdown signal, test time was about 2.000000 seconds 00:37:15.989 00:37:15.989 Latency(us) 00:37:15.989 [2024-12-07T10:49:15.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.989 [2024-12-07T10:49:15.343Z] =================================================================================================================== 00:37:15.989 [2024-12-07T10:49:15.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.989 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2770285 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2771282 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2771282 /var/tmp/bperf.sock 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2771282 ']' 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.560 11:49:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.560 [2024-12-07 11:49:15.795779] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:16.560 [2024-12-07 11:49:15.795889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771282 ] 00:37:16.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:16.560 Zero copy mechanism will not be used. 
00:37:16.819 [2024-12-07 11:49:15.929341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.819 [2024-12-07 11:49:16.003787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.389 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.389 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:17.389 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.389 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.389 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:17.648 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.649 11:49:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.909 nvme0n1 00:37:17.909 11:49:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:17.909 11:49:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:17.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:17.909 Zero copy mechanism will not be used. 00:37:17.909 Running I/O for 2 seconds... 
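The bdevperf summary tables in this trace are internally consistent: the MiB/s column is just IOPS times the I/O size, divided by 2^20. A quick awk check against the logged figures from the two randread runs:

```shell
#!/usr/bin/env bash
# Sanity-check the bdevperf summary rows from this trace:
# MiB/s = IOPS * io_size_bytes / 2^20.
mibps() {
  awk -v iops="$1" -v bs="$2" 'BEGIN { printf "%.2f\n", iops * bs / 1048576 }'
}

mibps 17707.74 4096     # 4 KiB run   -> 69.17, matching the table above
mibps 3562.31  131072   # 128 KiB run -> 445.29, matching its table
```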
00:37:20.236 3686.00 IOPS, 460.75 MiB/s [2024-12-07T10:49:19.590Z] 3560.00 IOPS, 445.00 MiB/s 00:37:20.236 Latency(us) 00:37:20.236 [2024-12-07T10:49:19.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.236 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:20.236 nvme0n1 : 2.00 3562.31 445.29 0.00 0.00 4488.32 761.17 14527.15 00:37:20.236 [2024-12-07T10:49:19.590Z] =================================================================================================================== 00:37:20.236 [2024-12-07T10:49:19.590Z] Total : 3562.31 445.29 0.00 0.00 4488.32 761.17 14527.15 00:37:20.236 { 00:37:20.236 "results": [ 00:37:20.236 { 00:37:20.236 "job": "nvme0n1", 00:37:20.236 "core_mask": "0x2", 00:37:20.236 "workload": "randread", 00:37:20.236 "status": "finished", 00:37:20.236 "queue_depth": 16, 00:37:20.236 "io_size": 131072, 00:37:20.237 "runtime": 2.003192, 00:37:20.237 "iops": 3562.3145459846087, 00:37:20.237 "mibps": 445.2893182480761, 00:37:20.237 "io_failed": 0, 00:37:20.237 "io_timeout": 0, 00:37:20.237 "avg_latency_us": 4488.316412556053, 00:37:20.237 "min_latency_us": 761.1733333333333, 00:37:20.237 "max_latency_us": 14527.146666666667 00:37:20.237 } 00:37:20.237 ], 00:37:20.237 "core_count": 1 00:37:20.237 } 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:20.237 | select(.opcode=="crc32c") 00:37:20.237 | "\(.module_name) \(.executed)"' 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2771282 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2771282 ']' 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2771282 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771282 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771282' 00:37:20.237 killing process with pid 2771282 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2771282 00:37:20.237 Received shutdown signal, test time was about 2.000000 seconds 
00:37:20.237 00:37:20.237 Latency(us) 00:37:20.237 [2024-12-07T10:49:19.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.237 [2024-12-07T10:49:19.591Z] =================================================================================================================== 00:37:20.237 [2024-12-07T10:49:19.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.237 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2771282 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2771971 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2771971 /var/tmp/bperf.sock 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2771971 ']' 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:20.808 11:49:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.808 11:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:20.808 [2024-12-07 11:49:20.046397] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:20.808 [2024-12-07 11:49:20.046507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771971 ] 00:37:21.070 [2024-12-07 11:49:20.184115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.070 [2024-12-07 11:49:20.258952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.643 11:49:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.643 11:49:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:21.643 11:49:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:21.643 11:49:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:21.643 11:49:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:21.904 11:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.904 11:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.165 nvme0n1 00:37:22.165 11:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:22.165 11:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:22.426 Running I/O for 2 seconds... 
00:37:24.421 19627.00 IOPS, 76.67 MiB/s [2024-12-07T10:49:23.775Z] 19691.50 IOPS, 76.92 MiB/s 00:37:24.421 Latency(us) 00:37:24.421 [2024-12-07T10:49:23.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:24.421 nvme0n1 : 2.00 19727.18 77.06 0.00 0.00 6482.35 2211.84 11468.80 00:37:24.421 [2024-12-07T10:49:23.775Z] =================================================================================================================== 00:37:24.421 [2024-12-07T10:49:23.775Z] Total : 19727.18 77.06 0.00 0.00 6482.35 2211.84 11468.80 00:37:24.421 { 00:37:24.421 "results": [ 00:37:24.421 { 00:37:24.421 "job": "nvme0n1", 00:37:24.421 "core_mask": "0x2", 00:37:24.421 "workload": "randwrite", 00:37:24.421 "status": "finished", 00:37:24.421 "queue_depth": 128, 00:37:24.421 "io_size": 4096, 00:37:24.421 "runtime": 2.002871, 00:37:24.421 "iops": 19727.18163076903, 00:37:24.421 "mibps": 77.05930324519153, 00:37:24.421 "io_failed": 0, 00:37:24.421 "io_timeout": 0, 00:37:24.421 "avg_latency_us": 6482.353588620891, 00:37:24.421 "min_latency_us": 2211.84, 00:37:24.421 "max_latency_us": 11468.8 00:37:24.421 } 00:37:24.421 ], 00:37:24.421 "core_count": 1 00:37:24.421 } 00:37:24.421 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:24.421 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:24.421 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:24.421 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:24.421 | select(.opcode=="crc32c") 00:37:24.421 | "\(.module_name) \(.executed)"' 00:37:24.421 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2771971 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2771971 ']' 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2771971 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771971 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771971' 00:37:24.682 killing process with pid 2771971 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2771971 00:37:24.682 Received shutdown signal, test time was about 2.000000 seconds 
00:37:24.682 00:37:24.682 Latency(us) 00:37:24.682 [2024-12-07T10:49:24.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.682 [2024-12-07T10:49:24.036Z] =================================================================================================================== 00:37:24.682 [2024-12-07T10:49:24.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:24.682 11:49:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2771971 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2772832 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2772832 /var/tmp/bperf.sock 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2772832 ']' 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:25.254 11:49:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:25.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.254 11:49:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:25.254 [2024-12-07 11:49:24.389476] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:25.254 [2024-12-07 11:49:24.389586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772832 ] 00:37:25.254 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:25.254 Zero copy mechanism will not be used. 
00:37:25.254 [2024-12-07 11:49:24.521757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.254 [2024-12-07 11:49:24.596599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.831 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.831 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:25.831 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:25.831 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:25.831 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:26.400 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:26.400 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:26.400 nvme0n1 00:37:26.660 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:26.660 11:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.660 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:26.660 Zero copy mechanism will not be used. 00:37:26.660 Running I/O for 2 seconds... 
00:37:28.549 3304.00 IOPS, 413.00 MiB/s [2024-12-07T10:49:27.903Z] 3792.50 IOPS, 474.06 MiB/s 00:37:28.549 Latency(us) 00:37:28.549 [2024-12-07T10:49:27.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.549 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:28.549 nvme0n1 : 2.01 3793.34 474.17 0.00 0.00 4211.76 1611.09 6608.21 00:37:28.549 [2024-12-07T10:49:27.903Z] =================================================================================================================== 00:37:28.549 [2024-12-07T10:49:27.903Z] Total : 3793.34 474.17 0.00 0.00 4211.76 1611.09 6608.21 00:37:28.549 { 00:37:28.549 "results": [ 00:37:28.549 { 00:37:28.549 "job": "nvme0n1", 00:37:28.549 "core_mask": "0x2", 00:37:28.549 "workload": "randwrite", 00:37:28.549 "status": "finished", 00:37:28.549 "queue_depth": 16, 00:37:28.549 "io_size": 131072, 00:37:28.549 "runtime": 2.005093, 00:37:28.549 "iops": 3793.340259030379, 00:37:28.549 "mibps": 474.1675323787974, 00:37:28.549 "io_failed": 0, 00:37:28.549 "io_timeout": 0, 00:37:28.549 "avg_latency_us": 4211.761185029362, 00:37:28.549 "min_latency_us": 1611.0933333333332, 00:37:28.549 "max_latency_us": 6608.213333333333 00:37:28.549 } 00:37:28.549 ], 00:37:28.549 "core_count": 1 00:37:28.549 } 00:37:28.549 11:49:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:28.549 11:49:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:28.549 11:49:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:28.549 11:49:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:28.549 | select(.opcode=="crc32c") 00:37:28.549 | "\(.module_name) \(.executed)"' 00:37:28.549 11:49:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2772832 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2772832 ']' 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2772832 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772832 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772832' 00:37:28.809 killing process with pid 2772832 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2772832 00:37:28.809 Received shutdown signal, test time was about 2.000000 seconds 
00:37:28.809 00:37:28.809 Latency(us) 00:37:28.809 [2024-12-07T10:49:28.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.809 [2024-12-07T10:49:28.163Z] =================================================================================================================== 00:37:28.809 [2024-12-07T10:49:28.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.809 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2772832 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2770200 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2770200 ']' 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2770200 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770200 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2770200' 00:37:29.379 killing process with pid 2770200 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2770200 00:37:29.379 11:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2770200 00:37:30.319 00:37:30.319 
real 0m19.301s 00:37:30.319 user 0m37.192s 00:37:30.319 sys 0m3.769s 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:30.319 ************************************ 00:37:30.319 END TEST nvmf_digest_clean 00:37:30.319 ************************************ 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:30.319 ************************************ 00:37:30.319 START TEST nvmf_digest_error 00:37:30.319 ************************************ 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2773790 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2773790 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2773790 ']' 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.319 11:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.319 [2024-12-07 11:49:29.600450] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:30.319 [2024-12-07 11:49:29.600582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:30.580 [2024-12-07 11:49:29.752078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.580 [2024-12-07 11:49:29.851645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:30.580 [2024-12-07 11:49:29.851688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:30.580 [2024-12-07 11:49:29.851700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.580 [2024-12-07 11:49:29.851714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.580 [2024-12-07 11:49:29.851726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:30.580 [2024-12-07 11:49:29.852930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.151 [2024-12-07 11:49:30.394810] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.151 11:49:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.151 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.411 null0 00:37:31.411 [2024-12-07 11:49:30.665123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:31.411 [2024-12-07 11:49:30.689396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2774051 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2774051 /var/tmp/bperf.sock 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2774051 ']' 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:31.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.411 11:49:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.672 [2024-12-07 11:49:30.782780] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:31.672 [2024-12-07 11:49:30.782883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774051 ] 00:37:31.672 [2024-12-07 11:49:30.915888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.672 [2024-12-07 11:49:30.991124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.243 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.243 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:32.243 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:32.243 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:32.503 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:32.764 nvme0n1 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:32.764 11:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.764 Running I/O for 2 seconds... 00:37:32.764 [2024-12-07 11:49:32.051592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:32.764 [2024-12-07 11:49:32.051632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.764 [2024-12-07 11:49:32.051645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.764 [2024-12-07 11:49:32.062955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:32.764 [2024-12-07 11:49:32.062982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.764 [2024-12-07 11:49:32.062994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.764 [2024-12-07 11:49:32.077999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:32.764 [2024-12-07 11:49:32.078034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.764 [2024-12-07 11:49:32.078044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.764 [2024-12-07 11:49:32.092808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:32.764 [2024-12-07 11:49:32.092832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
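The setup the log shows up to this point can be condensed into a short command sequence. This is a hedged sketch assembled from the RPC calls visible in the log (socket path, target address, and NQN are taken verbatim from the log; it assumes an SPDK bdevperf process is already listening on the socket and an NVMe-oF/TCP target is up, so it is not runnable standalone):

```shell
#!/bin/sh
# Sketch of the digest-error test flow seen in this log (paths from the log).
# Assumes bdevperf is already serving RPCs on /var/tmp/bperf.sock.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Disable bdev-layer retries so injected digest errors surface immediately
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Arm the accel error injector: corrupt 256 crc32c operations
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Attach the target with TCP data digest enabled (--ddgst)
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run I/O; each corrupted crc32c shows up below as a "data digest error"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
```

With the injector armed, every corrupted receive-side crc32c computation produces the `data digest error` / `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` pair repeated throughout the rest of this section.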
nsid:1 lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.764 [2024-12-07 11:49:32.092841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.764 [2024-12-07 11:49:32.108251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:32.764 [2024-12-07 11:49:32.108274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.764 [2024-12-07 11:49:32.108283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.025 [2024-12-07 11:49:32.121543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.121567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.121576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.136269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.136293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.136302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.147091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 
11:49:32.147114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.147123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.161683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.161705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.161714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.174863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.174885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.174894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.189104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.189126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.189136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.203944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.203967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.203976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.218236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.218259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.218269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.233322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.233345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.233354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.246437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.246459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.246468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 
11:49:32.257642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.257665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.257674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.272648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.272672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.272682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.287448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.287471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.287481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.302597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.302620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.302629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.315915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.315941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.315950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.328743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.328765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.328775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.343522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.343546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.343555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.358244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.358267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.358276] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.026 [2024-12-07 11:49:32.372635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.026 [2024-12-07 11:49:32.372660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.026 [2024-12-07 11:49:32.372669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.387403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.387426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.387435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.401396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.401419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.401428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.414752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.414774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8500 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.414783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.427178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.427200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.427209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.442806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.442830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.442839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.457061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.457091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.457100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.468170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.468192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.468201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.483350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.483372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.483381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.498638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.498660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.498669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.288 [2024-12-07 11:49:32.511384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.288 [2024-12-07 11:49:32.511407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.288 [2024-12-07 11:49:32.511416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.524634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.524657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.524665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.539657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.539680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.539689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.554973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.554999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.555008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.568222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.568245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.568254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.583110] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.583132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.583141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.594098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.594120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.594129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.608835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.608858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.608867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.622643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.622666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.622676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.289 [2024-12-07 11:49:32.638237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.289 [2024-12-07 11:49:32.638261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.289 [2024-12-07 11:49:32.638270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.654042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.654067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.654076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.668749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.668773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.668782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.679408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.679432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.679441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.694568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.694591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.694600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.708519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.708543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.708552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.723210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.552 [2024-12-07 11:49:32.723233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.552 [2024-12-07 11:49:32.723242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.552 [2024-12-07 11:49:32.738527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.738551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25009 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.738560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.753291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.753315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.753324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.767340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.767362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.767371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.777985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.778008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.778022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.792256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.792282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.792291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.807536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.807559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.807568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.821814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.821837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.821846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.837149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.837172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.837180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.851314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.851338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.851347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.865566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.865589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.865598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.879300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.879323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.879334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.553 [2024-12-07 11:49:32.892156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.553 [2024-12-07 11:49:32.892179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.553 [2024-12-07 11:49:32.892188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.815 [2024-12-07 11:49:32.906239] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.815 [2024-12-07 11:49:32.906262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.815 [2024-12-07 11:49:32.906272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.815 [2024-12-07 11:49:32.920057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.815 [2024-12-07 11:49:32.920080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.815 [2024-12-07 11:49:32.920089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.815 [2024-12-07 11:49:32.935536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.815 [2024-12-07 11:49:32.935559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.815 [2024-12-07 11:49:32.935568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.815 [2024-12-07 11:49:32.948798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.815 [2024-12-07 11:49:32.948822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.815 [2024-12-07 11:49:32.948831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:32.960486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:32.960508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:32.960517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:32.974269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:32.974293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:32.974302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:32.989597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:32.989620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:32.989629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.003190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.003215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.003225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.018182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.018205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.018215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 17992.00 IOPS, 70.28 MiB/s [2024-12-07T10:49:33.170Z] [2024-12-07 11:49:33.032765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.032793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.032802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.049365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.049388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.049398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.063289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.063312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:17340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.063321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.074268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.074291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.074300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.091126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.091150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.091159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.106002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.106030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.106040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.117028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 
[2024-12-07 11:49:33.117051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.117060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.132385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.132408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.132418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.146625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.146648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.146657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.816 [2024-12-07 11:49:33.158855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:33.816 [2024-12-07 11:49:33.158878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.816 [2024-12-07 11:49:33.158887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.174048] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.174070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.174080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.187447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.187470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.187479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.203386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.203409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.203418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.217074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.217097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.217106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.231766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.231789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.231798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.245087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.245110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.245120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.260618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.260642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.260651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.271530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.271556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.271565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.286605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.286629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.286638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.301496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.301519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.301528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.315360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.315383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.315392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.327187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.327210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5890 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.327219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.342174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.342196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.342205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.356162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.356185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.356194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.371159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.371181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.371190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.386088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.386110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.386120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.400612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.400635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.400645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.413785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.078 [2024-12-07 11:49:33.413808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.078 [2024-12-07 11:49:33.413818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.078 [2024-12-07 11:49:33.427204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.079 [2024-12-07 11:49:33.427227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.079 [2024-12-07 11:49:33.427237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.441994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.442024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.442034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.455549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.455572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.455581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.467023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.467046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.467055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.482377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.482399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.482409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.497096] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.497119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.497128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.511529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.511552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.511567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.525020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.525043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.525052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.539729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.539752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.539762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.553606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.553629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.553638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.567204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.567227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.567236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.581185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.581208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.581217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.595803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.595825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.595834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.608614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.608637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.608646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.622520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.622543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.622552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.635803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.635826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.635835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.649338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.649360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17782 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.649375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.663156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.663178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.663187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.676822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.676844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.676854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.341 [2024-12-07 11:49:33.690220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.341 [2024-12-07 11:49:33.690242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.341 [2024-12-07 11:49:33.690251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.705104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.705128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.705137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.720048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.720071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.720080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.736040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.736063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.736072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.748036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.748071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.760642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.760664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.760673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.775928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.775952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.775960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.790560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.790583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.790592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.804966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.804990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.804999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.819262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.819284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.819293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.832605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.832637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.845935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.845958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.845966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.860438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.860461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.860470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.874679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.874702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.874710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.886299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.886321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.886329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.901430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.901453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.603 [2024-12-07 11:49:33.901462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.603 [2024-12-07 11:49:33.916150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.603 [2024-12-07 11:49:33.916174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.604 [2024-12-07 11:49:33.916183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.604 [2024-12-07 11:49:33.930949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.604 [2024-12-07 11:49:33.930972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.604 [2024-12-07 11:49:33.930981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.604 [2024-12-07 11:49:33.943950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.604 [2024-12-07 11:49:33.943972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.604 [2024-12-07 11:49:33.943981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 [2024-12-07 11:49:33.956268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:33.956291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.865 [2024-12-07 11:49:33.956300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 [2024-12-07 11:49:33.970970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:33.970992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.865 [2024-12-07 11:49:33.971001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 [2024-12-07 11:49:33.985264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:33.985288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.865 [2024-12-07 11:49:33.985300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 [2024-12-07 11:49:34.000601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:34.000624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.865 [2024-12-07 11:49:34.000633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 [2024-12-07 11:49:34.015719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:34.015742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.865 [2024-12-07 11:49:34.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.865 18092.00 IOPS, 70.67 MiB/s [2024-12-07T10:49:34.219Z] [2024-12-07 11:49:34.029571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:34.865 [2024-12-07 11:49:34.029590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.865 [2024-12-07 11:49:34.029599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:34.865
00:37:34.865 Latency(us)
00:37:34.865 [2024-12-07T10:49:34.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:34.865 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:34.865 nvme0n1 : 2.00 18104.83 70.72 0.00 0.00 7062.40 2402.99 22719.15
00:37:34.865 [2024-12-07T10:49:34.219Z] ===================================================================================================================
00:37:34.865 [2024-12-07T10:49:34.219Z] Total : 18104.83 70.72 0.00 0.00 7062.40 2402.99 22719.15
00:37:34.865 {
00:37:34.865 "results": [
00:37:34.865 {
00:37:34.865 "job": "nvme0n1",
00:37:34.865 "core_mask": "0x2",
00:37:34.865 "workload": "randread",
00:37:34.865 "status": "finished",
00:37:34.865 "queue_depth": 128,
00:37:34.865 "io_size": 4096,
00:37:34.865 "runtime": 2.004106,
00:37:34.865 "iops": 18104.830782403726,
00:37:34.865 "mibps": 70.72199524376455,
00:37:34.865 "io_failed": 0,
00:37:34.865 "io_timeout": 0,
00:37:34.865 "avg_latency_us": 7062.40303384412,
00:37:34.865 "min_latency_us": 2402.9866666666667,
00:37:34.865 "max_latency_us": 22719.146666666667
00:37:34.865 }
00:37:34.865 ],
00:37:34.865 "core_count": 1
00:37:34.865 }
00:37:34.865 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:34.865 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:34.865 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:34.865 | .driver_specific
00:37:34.865 | .nvme_error
00:37:34.865 | .status_code
00:37:34.865 | .command_transient_transport_error'
00:37:34.865 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2774051
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2774051 ']'
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2774051
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774051
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774051'
00:37:35.128 killing process with pid 2774051
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2774051
00:37:35.128 Received shutdown signal, test time was about 2.000000 seconds
00:37:35.128
00:37:35.128 Latency(us)
00:37:35.128 [2024-12-07T10:49:34.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:35.128 [2024-12-07T10:49:34.482Z] ===================================================================================================================
00:37:35.128 [2024-12-07T10:49:34.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:35.128 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2774051
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2774763
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2774763 /var/tmp/bperf.sock
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2774763 ']'
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/bperf.sock...' 00:37:35.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.699 11:49:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:35.699 [2024-12-07 11:49:34.824151] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:35.699 [2024-12-07 11:49:34.824259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774763 ] 00:37:35.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:35.699 Zero copy mechanism will not be used. 00:37:35.699 [2024-12-07 11:49:34.956007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.699 [2024-12-07 11:49:35.032286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.275 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.275 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:36.275 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:36.275 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:36.536 11:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:36.797 nvme0n1 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:36.797 11:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:36.797 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:36.797 Zero copy mechanism will not be used. 00:37:36.797 Running I/O for 2 seconds... 
00:37:37.057 [2024-12-07 11:49:36.151238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.057 [2024-12-07 11:49:36.151284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.057 [2024-12-07 11:49:36.151297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.057 [2024-12-07 11:49:36.160792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.057 [2024-12-07 11:49:36.160826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.057 [2024-12-07 11:49:36.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.057 [2024-12-07 11:49:36.170054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.057 [2024-12-07 11:49:36.170085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.057 [2024-12-07 11:49:36.170095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.057 [2024-12-07 11:49:36.179620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.057 [2024-12-07 11:49:36.179648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.057 [2024-12-07 11:49:36.179658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.057 [2024-12-07 11:49:36.185925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.057 [2024-12-07 11:49:36.185951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.185961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.194595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.194620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.194630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.202629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.202654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.202663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.211535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.211560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 
11:49:36.211569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.219479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.219504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.219513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.225037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.225060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.225068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.232086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.232110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.232119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.240712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.240735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.240744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.244138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.244161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.244174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.252208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.252232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.252241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.262325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.262349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.262358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.274291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.274316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.274325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.286490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.286513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.286522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.299273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.299299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.299309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.311719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.311742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.311752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.317201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.317224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.317234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.326578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.326602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.326611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.333385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.333408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.333418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.342559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.342582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.342591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 
11:49:36.351383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.351406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.361181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.361203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.361213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.373867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.373890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.373899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.058 [2024-12-07 11:49:36.385174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.058 [2024-12-07 11:49:36.385197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.058 [2024-12-07 11:49:36.385206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.058 [2024-12-07 11:49:36.397879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.058 [2024-12-07 11:49:36.397901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.058 [2024-12-07 11:49:36.397910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.410391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.410414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.410423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.422629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.422661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.422672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.435036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.435059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.435069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.441744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.441768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.441777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.450703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.450726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.450735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.461197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.461220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.321 [2024-12-07 11:49:36.461229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.321 [2024-12-07 11:49:36.469936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.321 [2024-12-07 11:49:36.469959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.469969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.475598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.475622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.475631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.481004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.481033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.481042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.485886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.485909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.485918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.497303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.497327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.497336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.509217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.509241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.509250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.519377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.519400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.519409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.526718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.526740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.526749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.534296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.534319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.534328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.544375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.544398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.544407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.552119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.552142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.552151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.561702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.561726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.561735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.567035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.567058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.567070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.572105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.572128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.572137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.584155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.584179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.584188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.591297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.591322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.591330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.600757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.600780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.600788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.608503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.608527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.608535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.620308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.620332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.620341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.632132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.632156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.632165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.644236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.644260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.644269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.656421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.656444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.656453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.322 [2024-12-07 11:49:36.665630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.322 [2024-12-07 11:49:36.665654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.322 [2024-12-07 11:49:36.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.672656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.672681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.672689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.680723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.680747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.680756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.685998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.686027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.686036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.697717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.697741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.697750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.708676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.708701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.708710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.716024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.716047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.716056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.725644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.725668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.725681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.731475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.731498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.731507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.736812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.736837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.736846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.741686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.741710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.741719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.584 [2024-12-07 11:49:36.747101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.584 [2024-12-07 11:49:36.747125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.584 [2024-12-07 11:49:36.747134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.752290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.752314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.752323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.759191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.759216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.767190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.767214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.767223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.775255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.775280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.775289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.784680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.784705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.784714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.792628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.792651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.792660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.795599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.795622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.795631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.802609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.802633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.812336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.812360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.812369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.822940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.822964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.822973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.833447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.833470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.833479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.839695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.848557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.848593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.855423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.855447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.855456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.865738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.865762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.865771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.876393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.876417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.876426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.885287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.885311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.885320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.893806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.893830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.902374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.902398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.902406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.907669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.907691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.907700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.917408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.917432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.917441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.585 [2024-12-07 11:49:36.925840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.585 [2024-12-07 11:49:36.925864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.585 [2024-12-07 11:49:36.925873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.935645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.935669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.935677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.946003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.946034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.946043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.953274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.953298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.953306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.958063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.958087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.958096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.968043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.968067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.968076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.977475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.977499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.977508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.982867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.982891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.982900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:36.991100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:36.991125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:36.991137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.001557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.001581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.001590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.012351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.012377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.012388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.023309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.023332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.023341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.030383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.030407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.030416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.042502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.042525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.042534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.055329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.055352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.055361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.068078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.068101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.068110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.080001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.846 [2024-12-07 11:49:37.080029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.846 [2024-12-07 11:49:37.080038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:37.846 [2024-12-07 11:49:37.084380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700)
00:37:37.847 [2024-12-07 11:49:37.084408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:37.847 [2024-12-07 11:49:37.084416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:37.847 [2024-12-07 11:49:37.095135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.095159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.095168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.106130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.106154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.106163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.117066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.117099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.124714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.124738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.124747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.133510] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.133534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.133544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.144033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.144057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.144066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.847 3475.00 IOPS, 434.38 MiB/s [2024-12-07T10:49:37.201Z] [2024-12-07 11:49:37.153671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.153694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.153703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.159003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.159031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.159050] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.164379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.164402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.164411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.174682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.174706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.174714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.184503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.184527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.847 [2024-12-07 11:49:37.184536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:37.847 [2024-12-07 11:49:37.193224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:37.847 [2024-12-07 11:49:37.193251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:37.847 [2024-12-07 11:49:37.193260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.202563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.108 [2024-12-07 11:49:37.202588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.108 [2024-12-07 11:49:37.202596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.211353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.108 [2024-12-07 11:49:37.211376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.108 [2024-12-07 11:49:37.211385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.221178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.108 [2024-12-07 11:49:37.221202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.108 [2024-12-07 11:49:37.221210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.230880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.108 [2024-12-07 11:49:37.230903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.108 [2024-12-07 11:49:37.230912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.239424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.108 [2024-12-07 11:49:37.239452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.108 [2024-12-07 11:49:37.239461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.108 [2024-12-07 11:49:37.245999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.246028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.246037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.252049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.252073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.252082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.261708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 
11:49:37.261732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.261741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.270492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.270516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.270525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.280178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.280203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.280211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.290847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.290871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.290880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.300349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.300373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.300382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.312478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.312501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.312514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.321938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.321962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.321970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.331505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.331530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.331538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 
[2024-12-07 11:49:37.340400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.340423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.340433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.351202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.351226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.351235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.358896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.358919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.358928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.369582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.369606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.369615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.378856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.378880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.378889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.385620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.385643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.385652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.396773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.396800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.396809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.405741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.405765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 
11:49:37.405774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.411779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.411811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.421133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.421157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.421166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.428798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.428822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.428831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.436714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.436738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.436747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.445590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.445614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.445623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.109 [2024-12-07 11:49:37.455383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.109 [2024-12-07 11:49:37.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.109 [2024-12-07 11:49:37.455416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.466161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.466185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.466197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.474929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.474953] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.474962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.485567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.485590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.485599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.495331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.495353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.495362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.506808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.506832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.506841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.515532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.515554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.515563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.526909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.526934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.526942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.536235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.536259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.536268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.545883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.545907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.545916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 
11:49:37.552596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.552638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.552646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.558551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.558584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.567607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.567631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.567640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.573347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.371 [2024-12-07 11:49:37.573371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.371 [2024-12-07 11:49:37.573379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.371 [2024-12-07 11:49:37.580783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.580807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.580816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.590310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.590333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.590343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.600182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.600207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.600216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.613080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.613103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.613113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.624778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.624801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.624810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.631574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.631598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.631606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.642126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.642150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.642159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.649847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.649872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.649881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.657246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.657271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.657280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.666993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.667022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.667032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.676429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.676453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.676461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.683210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.683233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.683242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.690505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.690529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.690538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.701319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.701346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.701355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.712080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.712104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.712113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.372 [2024-12-07 11:49:37.720141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:38.372 [2024-12-07 11:49:37.720164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.372 [2024-12-07 11:49:37.720173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.730926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.730951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.730960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.737941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.737965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.737974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.746477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.746500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.746509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.755956] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.755980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.755990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.763530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.763554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.763563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.770492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.770516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.770525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.778332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.778356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.778365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.787557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.787581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.787590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.800265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.800289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.800298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.812065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.812089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.812097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.821184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.821208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.821217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.826729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.826753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.826762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.835313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.835339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.835348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.844747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.844771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.844780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.856539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.856567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.856576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.862239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.862263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.635 [2024-12-07 11:49:37.862272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.635 [2024-12-07 11:49:37.872832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.635 [2024-12-07 11:49:37.872856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.872865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.883058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.883082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.883091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.892070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.892094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.892102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.903585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.903609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.903618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.914337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.914361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.914370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.922501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.922526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.922534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.930937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.930962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.930972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.939746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.939771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.939780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.946951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.946976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.946985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.955005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.955033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.955042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.965593] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.965615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.965624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.970262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.970294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.636 [2024-12-07 11:49:37.980872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.636 [2024-12-07 11:49:37.980896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.636 [2024-12-07 11:49:37.980905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:37.991615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:37.991640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:37.991649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.000128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.000152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.000161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.009112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.009136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.009148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.019534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.019556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.019565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.029615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.029638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.029647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.036409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.036432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.036441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.044019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.044042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.044051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.050492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.898 [2024-12-07 11:49:38.050517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.898 [2024-12-07 11:49:38.050526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.898 [2024-12-07 11:49:38.058082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.058105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.058114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.066752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.066777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.066786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.074919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.074944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.074953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.079032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.079056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.084540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.084563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.084573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.091882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.091907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.091916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.099448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.099482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.106918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.106941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.106950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.116398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.116422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.116431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.125586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.125610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.125619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.132646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.132669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.132678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.139855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.139890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:38.899 [2024-12-07 11:49:38.146686] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.146710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.146719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:38.899 3507.00 IOPS, 438.38 MiB/s [2024-12-07T10:49:38.253Z] [2024-12-07 11:49:38.155484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039e700) 00:37:38.899 [2024-12-07 11:49:38.155508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.899 [2024-12-07 11:49:38.155517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:38.899 00:37:38.899 Latency(us) 00:37:38.899 [2024-12-07T10:49:38.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.899 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:38.899 nvme0n1 : 2.05 3437.50 429.69 0.00 0.00 4559.67 539.31 47841.28 00:37:38.899 [2024-12-07T10:49:38.253Z] =================================================================================================================== 00:37:38.899 [2024-12-07T10:49:38.253Z] Total : 3437.50 429.69 0.00 0.00 4559.67 539.31 47841.28 00:37:38.899 { 00:37:38.899 "results": [ 00:37:38.899 { 00:37:38.899 "job": "nvme0n1", 00:37:38.899 "core_mask": "0x2", 00:37:38.899 "workload": "randread", 00:37:38.899 "status": "finished", 00:37:38.899 "queue_depth": 16, 00:37:38.899 "io_size": 131072, 00:37:38.899 "runtime": 2.049161, 00:37:38.899 "iops": 3437.504422541713, 00:37:38.899 "mibps": 429.68805281771415, 
00:37:38.899 "io_failed": 0, 00:37:38.899 "io_timeout": 0, 00:37:38.899 "avg_latency_us": 4559.666007950029, 00:37:38.899 "min_latency_us": 539.3066666666666, 00:37:38.899 "max_latency_us": 47841.28 00:37:38.899 } 00:37:38.899 ], 00:37:38.899 "core_count": 1 00:37:38.899 } 00:37:38.899 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:38.899 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:38.899 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:38.899 | .driver_specific 00:37:38.899 | .nvme_error 00:37:38.899 | .status_code 00:37:38.899 | .command_transient_transport_error' 00:37:38.899 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2774763 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2774763 ']' 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2774763 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774763 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774763' 00:37:39.161 killing process with pid 2774763 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2774763 00:37:39.161 Received shutdown signal, test time was about 2.000000 seconds 00:37:39.161 00:37:39.161 Latency(us) 00:37:39.161 [2024-12-07T10:49:38.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.161 [2024-12-07T10:49:38.515Z] =================================================================================================================== 00:37:39.161 [2024-12-07T10:49:38.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:39.161 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2774763 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2775743 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2775743 /var/tmp/bperf.sock 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2775743 
']' 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.734 11:49:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:39.734 [2024-12-07 11:49:38.993487] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:37:39.734 [2024-12-07 11:49:38.993603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775743 ] 00:37:39.995 [2024-12-07 11:49:39.127923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.995 [2024-12-07 11:49:39.203486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.566 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.566 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:40.566 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:40.566 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:40.827 11:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:41.089 nvme0n1 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:41.089 11:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:41.351 Running I/O for 2 seconds... 
00:37:41.351 [2024-12-07 11:49:40.468151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:41.351 [2024-12-07 11:49:40.470157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.470192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.479599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:41.351 [2024-12-07 11:49:40.480860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.480886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.492739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:41.351 [2024-12-07 11:49:40.493977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.494001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.505093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:41.351 [2024-12-07 11:49:40.506301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.506324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.521486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:41.351 [2024-12-07 11:49:40.523604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.523627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.532104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:41.351 [2024-12-07 11:49:40.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.533528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.546068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:41.351 [2024-12-07 11:49:40.547463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.547485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.559226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:41.351 [2024-12-07 11:49:40.560602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.560625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:41.351 [2024-12-07 11:49:40.574140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:41.351 [2024-12-07 11:49:40.576250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.351 [2024-12-07 11:49:40.576272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.586040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:41.352 [2024-12-07 11:49:40.587600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.587622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.601089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:41.352 [2024-12-07 11:49:40.603359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.603382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.613003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:41.352 [2024-12-07 11:49:40.614753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3850 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.614775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.623783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:41.352 [2024-12-07 11:49:40.624810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.624831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.638628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:41.352 [2024-12-07 11:49:40.640362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.640384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.650510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:41.352 [2024-12-07 11:49:40.651709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.651730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.663843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:41.352 [2024-12-07 11:49:40.665039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.665061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.679024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:41.352 [2024-12-07 11:49:40.680890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.680912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:41.352 [2024-12-07 11:49:40.690025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:41.352 [2024-12-07 11:49:40.691392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.352 [2024-12-07 11:49:40.691413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:41.614 [2024-12-07 11:49:40.703128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:41.614 [2024-12-07 11:49:40.704490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.614 [2024-12-07 11:49:40.704511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.614 [2024-12-07 11:49:40.719446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:41.614 [2024-12-07 
11:49:40.721692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.614 [2024-12-07 11:49:40.721713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:41.614 [2024-12-07 11:49:40.729965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:41.614 [2024-12-07 11:49:40.731499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.614 [2024-12-07 11:49:40.731521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:41.614 [2024-12-07 11:49:40.741823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:41.614 [2024-12-07 11:49:40.742822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.614 [2024-12-07 11:49:40.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:41.614 [2024-12-07 11:49:40.756862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:41.615 [2024-12-07 11:49:40.758581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.758607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.768731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:41.615 [2024-12-07 11:49:40.769902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.769924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.783699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:37:41.615 [2024-12-07 11:49:40.785577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.785598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.795572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:41.615 [2024-12-07 11:49:40.796886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.796907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.808959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:41.615 [2024-12-07 11:49:40.810293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.810314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.824048] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:41.615 [2024-12-07 11:49:40.826086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.826107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.835051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:41.615 [2024-12-07 11:49:40.836565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.836587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.848985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:41.615 [2024-12-07 11:49:40.850506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.850527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.861189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:41.615 [2024-12-07 11:49:40.862685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.862707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.875128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:41.615 [2024-12-07 11:49:40.876630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.876651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.888263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:41.615 [2024-12-07 11:49:40.889749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.889771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.900439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:41.615 [2024-12-07 11:49:40.901865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.901888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.916180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:41.615 [2024-12-07 11:49:40.918354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.918375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.928051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:41.615 [2024-12-07 11:49:40.929704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.929726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.942994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:41.615 [2024-12-07 11:49:40.945353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.945375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:41.615 [2024-12-07 11:49:40.954843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:41.615 [2024-12-07 11:49:40.956658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.615 [2024-12-07 11:49:40.956680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:40.965620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:41.877 [2024-12-07 11:49:40.966739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 
11:49:40.966760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:40.978711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:41.877 [2024-12-07 11:49:40.979773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:40.979798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:40.991852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:41.877 [2024-12-07 11:49:40.992925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:40.992947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.006807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:41.877 [2024-12-07 11:49:41.008584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.008605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.018235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:41.877 [2024-12-07 11:49:41.019300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3105 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.019322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.031325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:37:41.877 [2024-12-07 11:49:41.032364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.032386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.043474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:41.877 [2024-12-07 11:49:41.044457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.044479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.059193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:37:41.877 [2024-12-07 11:49:41.060939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.060960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.071054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:41.877 [2024-12-07 11:49:41.072223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.072244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.084355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:41.877 [2024-12-07 11:49:41.085551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.085573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.096595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:41.877 [2024-12-07 11:49:41.097781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.097803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.109692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:41.877 [2024-12-07 11:49:41.110875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.110896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.122764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173f6890 00:37:41.877 [2024-12-07 11:49:41.123931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.123952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.135830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:41.877 [2024-12-07 11:49:41.136968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.136989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.149760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:37:41.877 [2024-12-07 11:49:41.150932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.150953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.164575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:41.877 [2024-12-07 11:49:41.166418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.166440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.176433] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:41.877 [2024-12-07 11:49:41.177752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.177773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.191448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:41.877 [2024-12-07 11:49:41.193487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.193508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.203329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:41.877 [2024-12-07 11:49:41.204824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.204846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:41.877 [2024-12-07 11:49:41.218352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:41.877 [2024-12-07 11:49:41.220567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.877 [2024-12-07 11:49:41.220588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.230229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:42.139 [2024-12-07 11:49:41.231913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.231934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.240951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:42.139 [2024-12-07 11:49:41.241876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.241898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.254029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:42.139 [2024-12-07 11:49:41.254986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.255008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.267136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:37:42.139 [2024-12-07 11:49:41.268095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.268116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.280224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:42.139 [2024-12-07 11:49:41.281162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.281183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:42.139 [2024-12-07 11:49:41.295053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:42.139 [2024-12-07 11:49:41.296696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.139 [2024-12-07 11:49:41.296717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.309002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:42.140 [2024-12-07 11:49:41.310662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.310684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.322061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:42.140 [2024-12-07 11:49:41.323692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 
[2024-12-07 11:49:41.323717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.334336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:42.140 [2024-12-07 11:49:41.335930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.335952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.347405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:42.140 [2024-12-07 11:49:41.348959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.348980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.359279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:42.140 [2024-12-07 11:49:41.360352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.360374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.372583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:42.140 [2024-12-07 11:49:41.373635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:20931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.373657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.385651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:42.140 [2024-12-07 11:49:41.386666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.386687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.400391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:42.140 [2024-12-07 11:49:41.402135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.402156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.413482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:37:42.140 [2024-12-07 11:49:41.415233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.415255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.425366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:42.140 [2024-12-07 11:49:41.426566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.426588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.440380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:42.140 [2024-12-07 11:49:41.442279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.442301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:42.140 19222.00 IOPS, 75.09 MiB/s [2024-12-07T10:49:41.494Z] [2024-12-07 11:49:41.452250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:42.140 [2024-12-07 11:49:41.453628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.467202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:42.140 [2024-12-07 11:49:41.469278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.469299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.140 [2024-12-07 11:49:41.477708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:42.140 [2024-12-07 11:49:41.479053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.140 [2024-12-07 11:49:41.479075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.494034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:42.402 [2024-12-07 11:49:41.496282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.496304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.507093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:42.402 [2024-12-07 11:49:41.509334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.509355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.518949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:42.402 [2024-12-07 11:49:41.520655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.520677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 
11:49:41.529722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:37:42.402 [2024-12-07 11:49:41.530677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.530699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.544556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:42.402 [2024-12-07 11:49:41.546242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.546267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.556414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:42.402 [2024-12-07 11:49:41.557550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.557572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.571448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:42.402 [2024-12-07 11:49:41.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.573358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.584636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:42.402 [2024-12-07 11:49:41.586498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.586520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.597706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:42.402 [2024-12-07 11:49:41.599563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.599584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:42.402 [2024-12-07 11:49:41.610834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:42.402 [2024-12-07 11:49:41.612691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.402 [2024-12-07 11:49:41.612713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.623950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:42.403 [2024-12-07 11:49:41.625782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.625804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.635830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:42.403 [2024-12-07 11:49:41.637137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.637159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.649154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:42.403 [2024-12-07 11:49:41.650454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.650475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.662295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:42.403 [2024-12-07 11:49:41.663655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.663677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.675405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:42.403 [2024-12-07 11:49:41.676720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18299 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:42.403 [2024-12-07 11:49:41.676742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.687720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:42.403 [2024-12-07 11:49:41.688999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.689026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.701711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:42.403 [2024-12-07 11:49:41.703101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.703123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.716615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:42.403 [2024-12-07 11:49:41.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.718623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.729712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:42.403 [2024-12-07 11:49:41.731711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:16059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.731733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:42.403 [2024-12-07 11:49:41.741121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:42.403 [2024-12-07 11:49:41.742378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.403 [2024-12-07 11:49:41.742400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.753321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:42.666 [2024-12-07 11:49:41.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.754597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.768964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:42.666 [2024-12-07 11:49:41.770915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.770938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.780847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:42.666 [2024-12-07 11:49:41.782282] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.782304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.794203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:42.666 [2024-12-07 11:49:41.795591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.795613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.807278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:42.666 [2024-12-07 11:49:41.808668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.808691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.820387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:42.666 [2024-12-07 11:49:41.821782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.821803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.833560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x2000173ef270 00:37:42.666 [2024-12-07 11:49:41.834944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.834965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.846671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:42.666 [2024-12-07 11:49:41.848024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.848046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.859819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:42.666 [2024-12-07 11:49:41.861216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.861237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.872922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:42.666 [2024-12-07 11:49:41.874287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.874309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.886040] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:42.666 [2024-12-07 11:49:41.887348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.887372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.900838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:42.666 [2024-12-07 11:49:41.902900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.902922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.913944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:42.666 [2024-12-07 11:49:41.915994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.916019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.924478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:42.666 [2024-12-07 11:49:41.925792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.925814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.938498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:42.666 [2024-12-07 11:49:41.939806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.939828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.951597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:42.666 [2024-12-07 11:49:41.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.666 [2024-12-07 11:49:41.952960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:42.666 [2024-12-07 11:49:41.964713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:42.667 [2024-12-07 11:49:41.966023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.667 [2024-12-07 11:49:41.966045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:42.667 [2024-12-07 11:49:41.977852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:37:42.667 [2024-12-07 11:49:41.979124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.667 [2024-12-07 11:49:41.979145] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:42.667 [2024-12-07 11:49:41.992698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:42.667 [2024-12-07 11:49:41.994722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.667 [2024-12-07 11:49:41.994744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:42.667 [2024-12-07 11:49:42.005955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:42.667 [2024-12-07 11:49:42.007963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.667 [2024-12-07 11:49:42.007985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.019049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:42.929 [2024-12-07 11:49:42.021037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.021059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.029576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:42.929 [2024-12-07 11:49:42.030814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 
[2024-12-07 11:49:42.030836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.042663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:42.929 [2024-12-07 11:49:42.043895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.043917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.059002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:42.929 [2024-12-07 11:49:42.061147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.061170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.072092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:42.929 [2024-12-07 11:49:42.074215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.074237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.085174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:42.929 [2024-12-07 11:49:42.087290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.087311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.097029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:42.929 [2024-12-07 11:49:42.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.098627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.110384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:42.929 [2024-12-07 11:49:42.111961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.111987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.125120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:42.929 [2024-12-07 11:49:42.127413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.127435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.136511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:42.929 [2024-12-07 11:49:42.138077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.138099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.149609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:42.929 [2024-12-07 11:49:42.151131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.151153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.162730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:42.929 [2024-12-07 11:49:42.164285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.164307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.177576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:42.929 [2024-12-07 11:49:42.179816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.179838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.189024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173e6b70 00:37:42.929 [2024-12-07 11:49:42.190546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.190568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.203856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:42.929 [2024-12-07 11:49:42.206078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.206099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.215725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:42.929 [2024-12-07 11:49:42.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.217454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.226509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:37:42.929 [2024-12-07 11:49:42.227532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.227553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.239621] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:42.929 [2024-12-07 11:49:42.240624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.240645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.254462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:42.929 [2024-12-07 11:49:42.256143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.256165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.267544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:42.929 [2024-12-07 11:49:42.269249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.929 [2024-12-07 11:49:42.269270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:42.929 [2024-12-07 11:49:42.278892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:43.191 [2024-12-07 11:49:42.279862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.279885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:37:43.191 [2024-12-07 11:49:42.292024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:43.191 [2024-12-07 11:49:42.292945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.292966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:43.191 [2024-12-07 11:49:42.305127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:43.191 [2024-12-07 11:49:42.306062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.306084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:43.191 [2024-12-07 11:49:42.322442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:43.191 [2024-12-07 11:49:42.324784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.324804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:43.191 [2024-12-07 11:49:42.333054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:43.191 [2024-12-07 11:49:42.334649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.334671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:43.191 [2024-12-07 11:49:42.347068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:43.191 [2024-12-07 11:49:42.348727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.191 [2024-12-07 11:49:42.348749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.360117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:43.192 [2024-12-07 11:49:42.361764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.361787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.372391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:43.192 [2024-12-07 11:49:42.373999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.374026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.385483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:43.192 [2024-12-07 11:49:42.387077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 
[2024-12-07 11:49:42.387099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.397351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:43.192 [2024-12-07 11:49:42.398406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.398428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.412410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:43.192 [2024-12-07 11:49:42.414184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.414205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.425494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:43.192 [2024-12-07 11:49:42.427244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.427265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:43.192 [2024-12-07 11:49:42.438567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:43.192 [2024-12-07 11:49:42.440314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:10562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.440335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:43.192 19349.50 IOPS, 75.58 MiB/s [2024-12-07T10:49:42.546Z] [2024-12-07 11:49:42.451630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:43.192 [2024-12-07 11:49:42.453368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.192 [2024-12-07 11:49:42.453389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.192 00:37:43.192 Latency(us) 00:37:43.192 [2024-12-07T10:49:42.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.192 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.192 nvme0n1 : 2.01 19355.27 75.61 0.00 0.00 6606.82 2512.21 17367.04 00:37:43.192 [2024-12-07T10:49:42.546Z] =================================================================================================================== 00:37:43.192 [2024-12-07T10:49:42.546Z] Total : 19355.27 75.61 0.00 0.00 6606.82 2512.21 17367.04 00:37:43.192 { 00:37:43.192 "results": [ 00:37:43.192 { 00:37:43.192 "job": "nvme0n1", 00:37:43.192 "core_mask": "0x2", 00:37:43.192 "workload": "randwrite", 00:37:43.192 "status": "finished", 00:37:43.192 "queue_depth": 128, 00:37:43.192 "io_size": 4096, 00:37:43.192 "runtime": 2.006017, 00:37:43.192 "iops": 19355.26967119421, 00:37:43.192 "mibps": 75.60652215310239, 00:37:43.192 "io_failed": 0, 00:37:43.192 "io_timeout": 0, 00:37:43.192 "avg_latency_us": 6606.823279333111, 00:37:43.192 "min_latency_us": 2512.213333333333, 00:37:43.192 "max_latency_us": 17367.04 00:37:43.192 } 00:37:43.192 ], 00:37:43.192 
"core_count": 1 00:37:43.192 } 00:37:43.192 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:43.192 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:43.192 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:43.192 | .driver_specific 00:37:43.192 | .nvme_error 00:37:43.192 | .status_code 00:37:43.192 | .command_transient_transport_error' 00:37:43.192 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2775743 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2775743 ']' 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2775743 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775743 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2775743' 00:37:43.453 killing process with pid 2775743 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2775743 00:37:43.453 Received shutdown signal, test time was about 2.000000 seconds 00:37:43.453 00:37:43.453 Latency(us) 00:37:43.453 [2024-12-07T10:49:42.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.453 [2024-12-07T10:49:42.807Z] =================================================================================================================== 00:37:43.453 [2024-12-07T10:49:42.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:43.453 11:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2775743 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2776430 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2776430 /var/tmp/bperf.sock 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2776430 ']' 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:44.025 11:49:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:44.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:44.025 11:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:44.025 [2024-12-07 11:49:43.251115] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:44.025 [2024-12-07 11:49:43.251222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776430 ] 00:37:44.025 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:44.025 Zero copy mechanism will not be used. 
00:37:44.287 [2024-12-07 11:49:43.379210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.287 [2024-12-07 11:49:43.453642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:44.861 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:45.123 nvme0n1 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:45.123 11:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:45.385 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:45.385 Zero copy mechanism will not be used. 00:37:45.385 Running I/O for 2 seconds... 00:37:45.385 [2024-12-07 11:49:44.543177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.543412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.543447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.553986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.554295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.554324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.385 
[2024-12-07 11:49:44.565468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.565742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.565767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.576857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.577156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.577179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.586533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.586802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.586826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.597848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.598086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.598107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.607377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.607645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.607667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.617009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.385 [2024-12-07 11:49:44.617242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.385 [2024-12-07 11:49:44.617262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.385 [2024-12-07 11:49:44.627095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.627312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.627332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.636511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.636716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.636737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.646333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.646612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.646633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.656004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.656267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.665530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.665803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.665825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.675883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.676150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.676171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.684961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.685035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.685056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.694742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.695030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.695055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.705046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.705292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.705314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.713956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.714207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.714229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.723503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.723700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.723721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.386 [2024-12-07 11:49:44.733756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.386 [2024-12-07 11:49:44.734004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.386 [2024-12-07 11:49:44.734032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.745042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.745315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.756832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 
11:49:44.757060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.757081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.768777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.769041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.769064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.780281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.780543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.780565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.792181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.792405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.792426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.804232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.804462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.804483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.815879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.816149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.816170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.827815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.828091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.828113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.837021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.837160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.837181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 
11:49:44.844206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.844441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.844464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.854396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.854781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.863282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.863652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.863674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.873060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.873497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.873524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.883254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.883602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.883624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.893418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.893781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.893804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.904419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.904790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.904813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.912201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.912425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.912447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.922958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.923317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.923340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.934030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.934373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.934395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.945396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.945760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.945782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.956335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.956700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.956723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.965612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.965965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.965986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.972899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.648 [2024-12-07 11:49:44.973130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.648 [2024-12-07 11:49:44.973152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.648 [2024-12-07 11:49:44.980850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.649 [2024-12-07 11:49:44.981082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.649 [2024-12-07 11:49:44.981104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.649 [2024-12-07 11:49:44.990070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.649 [2024-12-07 11:49:44.990325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.649 [2024-12-07 11:49:44.990348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.649 [2024-12-07 11:49:44.998332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 11:49:44.998559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:44.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.913 [2024-12-07 11:49:45.006401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 11:49:45.006773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:45.006795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.913 [2024-12-07 11:49:45.014195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 11:49:45.014525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:45.014548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.913 [2024-12-07 11:49:45.024049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 
11:49:45.024427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:45.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.913 [2024-12-07 11:49:45.032450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 11:49:45.032676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:45.032701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.913 [2024-12-07 11:49:45.042346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.913 [2024-12-07 11:49:45.042701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.913 [2024-12-07 11:49:45.042723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.050522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.050590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.050610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.061947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.062193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.062214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.073683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.074043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.074066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.085389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.085755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.085778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.097223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.097550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.097572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 
11:49:45.108559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.108880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.108902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.119702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.120059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.120081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.129495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.129848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.129871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.140524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.140888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.140911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.151903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.152243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.152266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.163520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.163862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.163885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.175153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.175531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.175553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.186407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.186776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.186799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.197516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.197880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.197902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.209293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.209647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.914 [2024-12-07 11:49:45.209670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:45.914 [2024-12-07 11:49:45.220509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.914 [2024-12-07 11:49:45.220872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.915 [2024-12-07 11:49:45.220895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:45.915 [2024-12-07 11:49:45.231816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.915 [2024-12-07 11:49:45.232176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:45.915 [2024-12-07 11:49:45.232199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:45.915 [2024-12-07 11:49:45.243403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.915 [2024-12-07 11:49:45.243776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.915 [2024-12-07 11:49:45.243799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:45.915 [2024-12-07 11:49:45.255028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:45.915 [2024-12-07 11:49:45.255377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.915 [2024-12-07 11:49:45.255400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.266739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.267089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.267112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.278425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.278791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.278813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.289837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.290171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.290194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.300778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.301123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.301146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.311914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.312264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.312287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.323957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 
11:49:45.324288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.324314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.335782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.336185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.336207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.347821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.348202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.348225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.359864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.360199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.360221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.371599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.175 [2024-12-07 11:49:45.371918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.175 [2024-12-07 11:49:45.371941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.175 [2024-12-07 11:49:45.383236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.383579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.383602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.395776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.396151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.396174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.407409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.407765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.407788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 
11:49:45.419170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.419500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.419523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.430678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.431041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.431064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.441951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.442238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.442260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.454350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.454576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.454598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.465974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.466361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.477752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.478124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.478146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.488271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.488603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.500110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.500352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.500374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.511973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.512334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.512356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.176 [2024-12-07 11:49:45.522093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.176 [2024-12-07 11:49:45.522453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.176 [2024-12-07 11:49:45.522480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.533194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.533554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.533576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.437 2901.00 IOPS, 362.62 MiB/s [2024-12-07T10:49:45.791Z] [2024-12-07 11:49:45.544604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.544877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.544898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.554674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.554996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.555026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.562556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.562889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.562912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.570087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.570454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.570476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.580602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.580962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.580985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.589977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.590209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.590230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.600147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.600353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.600373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.608293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.608546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.608569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.616742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.616974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.616996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.624843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.625180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.625202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.632053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.632388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.632410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.639094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.639287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.639309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.648379] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.648720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.648743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.657871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.658218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.658240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.662395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.662577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.662599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.666436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.666619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.666644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.670183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.670367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.670389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.674757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.680112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.680299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.680321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.437 [2024-12-07 11:49:45.684902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.437 [2024-12-07 11:49:45.685209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.437 [2024-12-07 11:49:45.685232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.692800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.692989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.693017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.700654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.701023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.701045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.708608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.708873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.708895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.716959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.717288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 
[2024-12-07 11:49:45.717310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.724414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.724654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.724676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.730296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.730479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.730500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.739433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.739646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.739667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.745459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.745719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.745740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.754102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.754344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.754365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.760252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.760438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.760459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.764888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.765077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.765099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.768768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.768952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.768973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.775936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.776250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:46.438 [2024-12-07 11:49:45.783515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.438 [2024-12-07 11:49:45.783723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.438 [2024-12-07 11:49:45.783744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:46.698 [2024-12-07 11:49:45.790272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:46.698 [2024-12-07 11:49:45.790626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.698 [2024-12-07 11:49:45.790649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:46.698 [2024-12-07 11:49:45.800430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.800702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.800725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.810471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.810796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.810818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.821124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.821483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.821505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.831512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.831714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.831735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.842302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.842511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.842532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.852685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.852916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.852937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.862736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.863018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.863046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.873113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.873344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.873367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.884109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.884309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.884330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.894650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.894969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.894991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.904959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.905184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.905206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.915242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.915492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.915514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.925032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.925327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.925349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.935746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.935965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.935985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.946081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.946370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.698 [2024-12-07 11:49:45.946393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.698 [2024-12-07 11:49:45.956231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.698 [2024-12-07 11:49:45.956426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:45.956447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:45.966596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:45.966863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:45.966885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:45.977073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:45.977292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:45.977313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:45.987334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:45.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:45.987602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:45.997453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:45.997721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:45.997743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:46.007752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:46.008183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:46.008206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:46.018245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:46.018482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:46.018503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:46.028397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:46.028641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:46.028662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:46.038915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.699 [2024-12-07 11:49:46.039188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.699 [2024-12-07 11:49:46.039214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.699 [2024-12-07 11:49:46.049007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.049296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.058369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.058756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.058779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.067421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.067602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.067623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.076416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.076695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.076716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.085374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.085667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.085689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.093656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.094002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.102452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.102652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.102674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.107513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.107836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.107859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.116764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.116950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.116971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.125440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.125700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.125722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.133176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.133472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.133494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.140376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.140705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.959 [2024-12-07 11:49:46.140728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.959 [2024-12-07 11:49:46.148339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.959 [2024-12-07 11:49:46.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.148667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.158944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.159276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.159298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.167500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.167814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.167836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.175316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.175626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.175648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.182903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.183069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.183093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.189713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.189877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.189898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.196868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.197141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.197163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.205094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.205455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.205478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.211582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.211744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.211766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.217155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.217469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.217491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.224971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.225234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.225256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.232906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.233147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.233168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.242680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.242978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.243000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.252524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.252818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.252841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.262639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.262898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.262920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.272324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.272564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.272587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.282028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.282304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.282324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.292363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.292647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.292668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:46.960 [2024-12-07 11:49:46.302664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:46.960 [2024-12-07 11:49:46.302915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.960 [2024-12-07 11:49:46.302935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.312898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.313159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.313180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.322486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.322552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.322572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.330781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.331006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.331032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.339317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.339417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.339439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.347773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.348021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.356721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.356809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.364738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.364824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.364845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.220 [2024-12-07 11:49:46.372724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.220 [2024-12-07 11:49:46.372859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.220 [2024-12-07 11:49:46.372879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.382141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.382350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.382371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.390444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.390514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.390534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.398424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.398517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.398537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.405597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.405869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.405894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.413981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.414228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.414250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.423982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.424189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.424210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.433078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.433328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.433348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.442337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.442472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.442493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.452510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.452740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.452761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.462634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.462835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.462856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.473802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.474025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.474046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.484295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.484556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.484578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.494560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.494800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.494822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.504913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.505134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.505155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.515963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.516201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.526101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.526374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.526397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:47.221 [2024-12-07 11:49:46.536449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.536658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.536678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:47.221 3232.00 IOPS, 404.00 MiB/s [2024-12-07T10:49:46.575Z] [2024-12-07 11:49:46.547040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:37:47.221 [2024-12-07 11:49:46.547136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:47.221 [2024-12-07 11:49:46.547156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:47.221 
00:37:47.221 Latency(us) 
00:37:47.221 
[2024-12-07T10:49:46.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:47.221 nvme0n1 : 2.01 3230.24 403.78 0.00 0.00 4942.62 1802.24 12451.84 00:37:47.221 [2024-12-07T10:49:46.575Z] =================================================================================================================== 00:37:47.221 [2024-12-07T10:49:46.575Z] Total : 3230.24 403.78 0.00 0.00 4942.62 1802.24 12451.84 00:37:47.221 { 00:37:47.221 "results": [ 00:37:47.221 { 00:37:47.221 "job": "nvme0n1", 00:37:47.221 "core_mask": "0x2", 00:37:47.221 "workload": "randwrite", 00:37:47.221 "status": "finished", 00:37:47.221 "queue_depth": 16, 00:37:47.221 "io_size": 131072, 00:37:47.221 "runtime": 2.00604, 00:37:47.221 "iops": 3230.2446611234072, 00:37:47.221 "mibps": 403.7805826404259, 00:37:47.221 "io_failed": 0, 00:37:47.221 "io_timeout": 0, 00:37:47.221 "avg_latency_us": 4942.61833744856, 00:37:47.221 "min_latency_us": 1802.24, 00:37:47.221 "max_latency_us": 12451.84 00:37:47.221 } 00:37:47.221 ], 00:37:47.221 "core_count": 1 00:37:47.221 } 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:47.481 | .driver_specific 00:37:47.481 | .nvme_error 00:37:47.481 | .status_code 00:37:47.481 | .command_transient_transport_error' 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 
00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2776430 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2776430 ']' 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2776430 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.481 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2776430 00:37:47.741 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:47.741 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:47.741 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2776430' 00:37:47.741 killing process with pid 2776430 00:37:47.741 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2776430 00:37:47.741 Received shutdown signal, test time was about 2.000000 seconds 00:37:47.741 00:37:47.741 Latency(us) 00:37:47.741 [2024-12-07T10:49:47.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.741 [2024-12-07T10:49:47.095Z] =================================================================================================================== 00:37:47.741 [2024-12-07T10:49:47.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.741 11:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2776430 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@116 -- # killprocess 2773790 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2773790 ']' 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2773790 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.001 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773790 00:37:48.261 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.261 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.261 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773790' 00:37:48.261 killing process with pid 2773790 00:37:48.261 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2773790 00:37:48.261 11:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2773790 00:37:48.832 00:37:48.832 real 0m18.642s 00:37:48.832 user 0m35.784s 00:37:48.832 sys 0m3.805s 00:37:48.832 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.832 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:48.832 ************************************ 00:37:48.832 END TEST nvmf_digest_error 00:37:48.833 ************************************ 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:48.833 11:49:48 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.833 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.095 rmmod nvme_tcp 00:37:49.095 rmmod nvme_fabrics 00:37:49.095 rmmod nvme_keyring 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2773790 ']' 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2773790 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2773790 ']' 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2773790 00:37:49.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2773790) - No such process 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2773790 is not found' 00:37:49.095 Process with pid 2773790 is not found 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:49.095 11:49:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.095 11:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.030 11:49:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.030 00:37:51.030 real 0m47.807s 00:37:51.030 user 1m15.034s 00:37:51.030 sys 0m13.314s 00:37:51.030 11:49:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.030 11:49:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:51.030 ************************************ 00:37:51.030 END TEST nvmf_digest 00:37:51.030 ************************************ 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.292 ************************************ 00:37:51.292 START TEST nvmf_bdevperf 00:37:51.292 ************************************ 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:51.292 * Looking for test storage... 00:37:51.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 
00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 
00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.292 --rc genhtml_branch_coverage=1 00:37:51.292 --rc genhtml_function_coverage=1 00:37:51.292 --rc genhtml_legend=1 00:37:51.292 --rc geninfo_all_blocks=1 00:37:51.292 --rc geninfo_unexecuted_blocks=1 00:37:51.292 00:37:51.292 ' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.292 --rc genhtml_branch_coverage=1 00:37:51.292 --rc genhtml_function_coverage=1 00:37:51.292 --rc genhtml_legend=1 00:37:51.292 --rc geninfo_all_blocks=1 00:37:51.292 --rc geninfo_unexecuted_blocks=1 00:37:51.292 00:37:51.292 ' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.292 --rc genhtml_branch_coverage=1 00:37:51.292 --rc genhtml_function_coverage=1 00:37:51.292 --rc genhtml_legend=1 00:37:51.292 --rc geninfo_all_blocks=1 00:37:51.292 --rc geninfo_unexecuted_blocks=1 00:37:51.292 00:37:51.292 ' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.292 --rc genhtml_branch_coverage=1 00:37:51.292 --rc genhtml_function_coverage=1 00:37:51.292 --rc genhtml_legend=1 00:37:51.292 --rc geninfo_all_blocks=1 00:37:51.292 --rc geninfo_unexecuted_blocks=1 00:37:51.292 00:37:51.292 ' 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.292 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.293 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.554 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.554 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.554 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:37:51.554 11:49:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:59.695 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.695 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:59.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:59.696 Found net devices under 0000:31:00.0: cvl_0_0 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:59.696 Found net devices under 0000:31:00.1: cvl_0_1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:59.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:59.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:37:59.696 00:37:59.696 --- 10.0.0.2 ping statistics --- 00:37:59.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.696 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:59.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:59.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:37:59.696 00:37:59.696 --- 10.0.0.1 ping statistics --- 00:37:59.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.696 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2781522 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2781522 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
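The namespace plumbing traced above (nvmf/common.sh, lines @250-@291) can be summarized as the sketch below. Interface names `cvl_0_0`/`cvl_0_1`, the `cvl_0_0_ns_spdk` namespace, and the 10.0.0.0/24 addresses are taken from the trace; the `run` helper only records each command into a printed plan, so the sketch can be inspected without root privileges (it is not a drop-in replacement for the test harness).

```shell
#!/usr/bin/env sh
# Sketch of the NVMe-oF TCP loopback topology set up by nvmf/common.sh above.
# The target-side port is moved into a network namespace so initiator and
# target can talk over real NICs on one host. Dry-run: commands are only
# collected and printed, never executed.
TGT_IF=cvl_0_0            # target-side port (moved into the namespace)
INI_IF=cvl_0_1            # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk

PLAN=""
run() { PLAN="${PLAN}$*
"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2    # initiator -> target reachability check

printf '%s' "$PLAN"
```

The target application is then launched with `ip netns exec "$NS" ...` (the `NVMF_TARGET_NS_CMD` prefix seen in the trace), so it binds its TCP listener inside the namespace while bdevperf connects from the root namespace.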
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:59.696 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2781522 ']' 00:37:59.697 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.697 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.697 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.697 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.697 11:49:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 [2024-12-07 11:49:57.959597] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:59.697 [2024-12-07 11:49:57.959701] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.697 [2024-12-07 11:49:58.117976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.697 [2024-12-07 11:49:58.245109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.697 [2024-12-07 11:49:58.245180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:59.697 [2024-12-07 11:49:58.245195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.697 [2024-12-07 11:49:58.245208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.697 [2024-12-07 11:49:58.245219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.697 [2024-12-07 11:49:58.248224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.697 [2024-12-07 11:49:58.248354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.697 [2024-12-07 11:49:58.248380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 [2024-12-07 11:49:58.779422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.697 11:49:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 Malloc0 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:59.697 [2024-12-07 11:49:58.886160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.697 { 00:37:59.697 "params": { 00:37:59.697 "name": "Nvme$subsystem", 00:37:59.697 "trtype": "$TEST_TRANSPORT", 00:37:59.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.697 "adrfam": "ipv4", 00:37:59.697 "trsvcid": "$NVMF_PORT", 00:37:59.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.697 "hdgst": ${hdgst:-false}, 00:37:59.697 "ddgst": ${ddgst:-false} 00:37:59.697 }, 00:37:59.697 "method": "bdev_nvme_attach_controller" 00:37:59.697 } 00:37:59.697 EOF 00:37:59.697 )") 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:59.697 11:49:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.697 "params": { 00:37:59.697 "name": "Nvme1", 00:37:59.697 "trtype": "tcp", 00:37:59.697 "traddr": "10.0.0.2", 00:37:59.697 "adrfam": "ipv4", 00:37:59.697 "trsvcid": "4420", 00:37:59.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.697 "hdgst": false, 00:37:59.697 "ddgst": false 00:37:59.697 }, 00:37:59.697 "method": "bdev_nvme_attach_controller" 00:37:59.697 }' 00:37:59.697 [2024-12-07 11:49:58.980905] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:37:59.697 [2024-12-07 11:49:58.981008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781873 ] 00:37:59.959 [2024-12-07 11:49:59.113652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.959 [2024-12-07 11:49:59.211352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.531 Running I/O for 1 seconds... 
00:38:01.476 7993.00 IOPS, 31.22 MiB/s 00:38:01.476 Latency(us) 00:38:01.476 [2024-12-07T10:50:00.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.476 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:01.476 Verification LBA range: start 0x0 length 0x4000 00:38:01.476 Nvme1n1 : 1.00 8080.79 31.57 0.00 0.00 15771.75 1727.15 14199.47 00:38:01.476 [2024-12-07T10:50:00.830Z] =================================================================================================================== 00:38:01.476 [2024-12-07T10:50:00.830Z] Total : 8080.79 31.57 0.00 0.00 15771.75 1727.15 14199.47 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2782220 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.047 { 00:38:02.047 "params": { 00:38:02.047 "name": "Nvme$subsystem", 00:38:02.047 "trtype": "$TEST_TRANSPORT", 00:38:02.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.047 "adrfam": "ipv4", 00:38:02.047 "trsvcid": "$NVMF_PORT", 00:38:02.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.047 "hdgst": ${hdgst:-false}, 00:38:02.047 "ddgst": 
${ddgst:-false} 00:38:02.047 }, 00:38:02.047 "method": "bdev_nvme_attach_controller" 00:38:02.047 } 00:38:02.047 EOF 00:38:02.047 )") 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:02.047 11:50:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.047 "params": { 00:38:02.047 "name": "Nvme1", 00:38:02.047 "trtype": "tcp", 00:38:02.047 "traddr": "10.0.0.2", 00:38:02.047 "adrfam": "ipv4", 00:38:02.047 "trsvcid": "4420", 00:38:02.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.047 "hdgst": false, 00:38:02.047 "ddgst": false 00:38:02.047 }, 00:38:02.047 "method": "bdev_nvme_attach_controller" 00:38:02.047 }' 00:38:02.308 [2024-12-07 11:50:01.401560] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:02.308 [2024-12-07 11:50:01.401662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782220 ] 00:38:02.308 [2024-12-07 11:50:01.525905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.308 [2024-12-07 11:50:01.624368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.880 Running I/O for 15 seconds... 
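The `gen_nvmf_target_json` trace above shows the pattern used to feed bdevperf its controller config: one heredoc fragment is generated per subsystem, accumulated into an array, then joined and piped to the process as `--json /dev/fd/63`. A minimal single-subsystem sketch, using the fixed values visible in the trace (10.0.0.2:4420, cnode1) and skipping the `jq` normalization step:

```shell
#!/usr/bin/env sh
# Sketch of the gen_nvmf_target_json heredoc pattern traced above:
# substitute the subsystem index into a JSON fragment and emit it as the
# bdevperf attach-controller config. Values match the trace; the real
# helper loops over "${@:-1}" and joins fragments with IFS=','.
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

bdevperf would then consume this on a file descriptor, e.g. `bdevperf --json /dev/fd/63 ... 63<<<"$config"` in bash; the heredoc-in-subshell form keeps the template readable while still expanding `$subsystem` and the `${hdgst:-false}`-style defaults.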
00:38:04.770 10015.00 IOPS, 39.12 MiB/s [2024-12-07T10:50:04.388Z] 10032.00 IOPS, 39.19 MiB/s [2024-12-07T10:50:04.388Z] 11:50:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2781522 00:38:05.034 11:50:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:05.034 [2024-12-07 11:50:04.346965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 
[2024-12-07 11:50:04.347327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.034 [2024-12-07 11:50:04.347605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.034 [2024-12-07 11:50:04.347722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.034 [2024-12-07 11:50:04.347734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:05.034 [2024-12-07 11:50:04.347745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.347979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.347991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:05.035 [2024-12-07 11:50:04.348171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 
[2024-12-07 11:50:04.348576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.035 [2024-12-07 11:50:04.348745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.035 [2024-12-07 11:50:04.348771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.035 [2024-12-07 11:50:04.348796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.035 [2024-12-07 11:50:04.348820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.348965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.348978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:05.035 [2024-12-07 11:50:04.348989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:05.035 
[2024-12-07 11:50:04.349398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.035 [2024-12-07 11:50:04.349552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.035 [2024-12-07 11:50:04.349562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 
[2024-12-07 11:50:04.349806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.349985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.349995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:05.036 [2024-12-07 11:50:04.350142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.350154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039ec00 is same with the state(6) to be set 00:38:05.036 [2024-12-07 11:50:04.350169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:05.036 [2024-12-07 11:50:04.350179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:05.036 [2024-12-07 11:50:04.350191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:38:05.036 [2024-12-07 11:50:04.350203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:05.036 [2024-12-07 11:50:04.354176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller 00:38:05.036 [2024-12-07 11:50:04.354259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.036 [2024-12-07 11:50:04.355223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.036 [2024-12-07 11:50:04.355271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.036 [2024-12-07 11:50:04.355290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.036 [2024-12-07 11:50:04.355564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.036 [2024-12-07 11:50:04.355810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.036 [2024-12-07 11:50:04.355824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.036 [2024-12-07 11:50:04.355847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.036 [2024-12-07 11:50:04.355861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.036 [2024-12-07 11:50:04.368751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.036 [2024-12-07 11:50:04.369337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.036 [2024-12-07 11:50:04.369363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.036 [2024-12-07 11:50:04.369375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.036 [2024-12-07 11:50:04.369614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.036 [2024-12-07 11:50:04.369851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.036 [2024-12-07 11:50:04.369863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.036 [2024-12-07 11:50:04.369873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.036 [2024-12-07 11:50:04.369883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.298 [2024-12-07 11:50:04.382966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.298 [2024-12-07 11:50:04.383535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.298 [2024-12-07 11:50:04.383583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.298 [2024-12-07 11:50:04.383598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.298 [2024-12-07 11:50:04.383869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.298 [2024-12-07 11:50:04.384123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.298 [2024-12-07 11:50:04.384139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.298 [2024-12-07 11:50:04.384150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.298 [2024-12-07 11:50:04.384162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.298 [2024-12-07 11:50:04.397051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.298 [2024-12-07 11:50:04.397704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.298 [2024-12-07 11:50:04.397751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.298 [2024-12-07 11:50:04.397768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.298 [2024-12-07 11:50:04.398050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.298 [2024-12-07 11:50:04.398294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.298 [2024-12-07 11:50:04.398308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.298 [2024-12-07 11:50:04.398320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.298 [2024-12-07 11:50:04.398337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.298 [2024-12-07 11:50:04.411196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.411903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.411950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.411966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.412246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.412490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.412504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.412515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.412527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.425425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.426092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.426139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.426155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.426425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.426668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.426682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.426693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.426705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.439587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.440320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.440367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.440383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.440654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.440896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.440910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.440921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.440933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.453795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.454512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.454560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.454576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.454846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.455097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.455112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.455123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.455135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.467994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.468710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.468764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.468780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.469059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.469304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.469318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.469329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.469340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.482201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.482919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.482966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.482983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.483263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.483507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.483521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.483532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.483544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.496445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.497095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.497142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.497164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.497437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.497679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.497694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.497705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.497716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.510601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.511318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.511365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.511381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.511652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.511895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.511909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.511920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.511938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.524839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.525426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.525474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.525492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.525763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.526006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.526037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.299 [2024-12-07 11:50:04.526054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.299 [2024-12-07 11:50:04.526066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.299 [2024-12-07 11:50:04.538935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.299 [2024-12-07 11:50:04.539650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.299 [2024-12-07 11:50:04.539696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.299 [2024-12-07 11:50:04.539712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.299 [2024-12-07 11:50:04.539982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.299 [2024-12-07 11:50:04.540240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.299 [2024-12-07 11:50:04.540255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.540267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.540278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.553136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.553836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.553883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.553899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.554177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.554420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.554435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.554446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.554458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.567319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.568038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.568085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.568102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.568372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.568615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.568629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.568640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.568651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.581522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.582300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.582347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.582363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.582633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.582877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.582891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.582907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.582918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.595599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.596228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.596254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.596266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.596505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.596744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.596756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.596767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.596776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.609858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.610425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.610448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.610459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.610696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.610934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.610947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.610957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.610967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.624069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.624639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.624673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.624911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.625154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.625167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.625177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.625188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.300 [2024-12-07 11:50:04.638288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.300 [2024-12-07 11:50:04.638875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.300 [2024-12-07 11:50:04.638898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.300 [2024-12-07 11:50:04.638909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.300 [2024-12-07 11:50:04.639155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.300 [2024-12-07 11:50:04.639394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.300 [2024-12-07 11:50:04.639406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.300 [2024-12-07 11:50:04.639416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.300 [2024-12-07 11:50:04.639425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.562 [2024-12-07 11:50:04.652511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.562 [2024-12-07 11:50:04.653120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.562 [2024-12-07 11:50:04.653168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.562 [2024-12-07 11:50:04.653186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.562 [2024-12-07 11:50:04.653455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.562 [2024-12-07 11:50:04.653698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.562 [2024-12-07 11:50:04.653712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.562 [2024-12-07 11:50:04.653724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.562 [2024-12-07 11:50:04.653735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.562 [2024-12-07 11:50:04.666602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.562 [2024-12-07 11:50:04.667277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.562 [2024-12-07 11:50:04.667325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.563 [2024-12-07 11:50:04.667340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.563 [2024-12-07 11:50:04.667618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.563 [2024-12-07 11:50:04.667861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.563 [2024-12-07 11:50:04.667875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.563 [2024-12-07 11:50:04.667886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.563 [2024-12-07 11:50:04.667897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.563 [2024-12-07 11:50:04.680843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.563 [2024-12-07 11:50:04.681560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.563 [2024-12-07 11:50:04.681612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.563 [2024-12-07 11:50:04.681628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.563 [2024-12-07 11:50:04.681898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.563 [2024-12-07 11:50:04.682149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.563 [2024-12-07 11:50:04.682164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.563 [2024-12-07 11:50:04.682177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.563 [2024-12-07 11:50:04.682188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.563 [2024-12-07 11:50:04.695071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.695707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.695754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.695770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.696048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.696291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.696305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.696316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.696328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.709181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.709815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.709841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.709853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.710097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.710335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.710348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.710358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.710368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.723456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.724049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.724072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.724083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.724325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.724563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.724575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.724585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.724595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.737683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.738340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.738388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.738404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.738674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.738916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.738931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.738942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.738953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.751866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.752543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.752590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.752605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.752876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.753129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.753144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.753156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.753168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.766022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.766622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.766669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.766685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.766955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.767209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.767229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.767240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.767252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.780148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.780687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.780734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.780751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.781031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.781275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.781289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.781300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.781312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.563 [2024-12-07 11:50:04.794201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.563 [2024-12-07 11:50:04.794925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.563 [2024-12-07 11:50:04.794972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.563 [2024-12-07 11:50:04.794987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.563 [2024-12-07 11:50:04.795268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.563 [2024-12-07 11:50:04.795512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.563 [2024-12-07 11:50:04.795525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.563 [2024-12-07 11:50:04.795537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.563 [2024-12-07 11:50:04.795548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.808403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.809097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.809144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.809159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.809430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.809673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.809686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.809698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.809715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.822594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.823315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.823362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.823378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.823648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.823891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.823905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.823916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.823928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.836798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.837415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.837463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.837480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.837750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.837993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.838007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.838030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.838042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.850900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.851588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.851635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.851651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.851921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.852177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.852192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.852203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.852215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.865092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.865796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.865843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.865860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.866141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.866386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.866425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.866436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.866448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.879330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.879947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.879972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.879984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.880229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.880467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.880480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.880490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.880500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.893377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.894085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.894132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.894147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.894417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.894660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.894674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.894685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.894697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.564 [2024-12-07 11:50:04.907564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.564 [2024-12-07 11:50:04.908139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.564 [2024-12-07 11:50:04.908165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.564 [2024-12-07 11:50:04.908182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.564 [2024-12-07 11:50:04.908421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.564 [2024-12-07 11:50:04.908659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.564 [2024-12-07 11:50:04.908671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.564 [2024-12-07 11:50:04.908681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.564 [2024-12-07 11:50:04.908691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.827 [2024-12-07 11:50:04.921776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.827 [2024-12-07 11:50:04.922450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.827 [2024-12-07 11:50:04.922498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.827 [2024-12-07 11:50:04.922513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.827 [2024-12-07 11:50:04.922784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.827 [2024-12-07 11:50:04.923038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.827 [2024-12-07 11:50:04.923054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.827 [2024-12-07 11:50:04.923065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.827 [2024-12-07 11:50:04.923077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.827 [2024-12-07 11:50:04.935954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.827 [2024-12-07 11:50:04.936681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.827 [2024-12-07 11:50:04.936728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.827 [2024-12-07 11:50:04.936744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.827 [2024-12-07 11:50:04.937024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.827 [2024-12-07 11:50:04.937267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.827 [2024-12-07 11:50:04.937282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.827 [2024-12-07 11:50:04.937293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.827 [2024-12-07 11:50:04.937304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.827 [2024-12-07 11:50:04.950171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.827 [2024-12-07 11:50:04.950884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.827 [2024-12-07 11:50:04.950931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.827 [2024-12-07 11:50:04.950948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.827 [2024-12-07 11:50:04.951234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.827 [2024-12-07 11:50:04.951478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.827 [2024-12-07 11:50:04.951492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.827 [2024-12-07 11:50:04.951503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.827 [2024-12-07 11:50:04.951515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.827 [2024-12-07 11:50:04.964413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.827 [2024-12-07 11:50:04.964979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.827 [2024-12-07 11:50:04.965004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.827 [2024-12-07 11:50:04.965025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.827 [2024-12-07 11:50:04.965274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.827 [2024-12-07 11:50:04.965512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.827 [2024-12-07 11:50:04.965525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.827 [2024-12-07 11:50:04.965534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.827 [2024-12-07 11:50:04.965544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.827 [2024-12-07 11:50:04.978633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.827 [2024-12-07 11:50:04.979300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.827 [2024-12-07 11:50:04.979347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.827 [2024-12-07 11:50:04.979363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.827 [2024-12-07 11:50:04.979633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.828 [2024-12-07 11:50:04.979876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.828 [2024-12-07 11:50:04.979890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.828 [2024-12-07 11:50:04.979901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.828 [2024-12-07 11:50:04.979913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.828 [2024-12-07 11:50:04.992802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.828 [2024-12-07 11:50:04.993519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.828 [2024-12-07 11:50:04.993567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.828 [2024-12-07 11:50:04.993583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.828 [2024-12-07 11:50:04.993852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.828 [2024-12-07 11:50:04.994104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.828 [2024-12-07 11:50:04.994124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.828 [2024-12-07 11:50:04.994136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.828 [2024-12-07 11:50:04.994147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.828 [2024-12-07 11:50:05.007015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:05.828 [2024-12-07 11:50:05.007696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.828 [2024-12-07 11:50:05.007743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:05.828 [2024-12-07 11:50:05.007759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:05.828 [2024-12-07 11:50:05.008040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:05.828 [2024-12-07 11:50:05.008283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:05.828 [2024-12-07 11:50:05.008298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:05.828 [2024-12-07 11:50:05.008309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:05.828 [2024-12-07 11:50:05.008321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:05.828 [2024-12-07 11:50:05.021175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.021868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.021915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.021931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.022229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.022474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.022488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.022500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.022511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.035387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.035955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.035980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.035992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.036237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.036476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.036489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.036499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.036513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.049587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.050194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.050218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.050230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.050467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.050705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.050717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.050727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.050736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.063817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.064271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.064296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.064307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.064544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.064782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.064795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.064805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.064821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.077902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.078502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.078526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.078537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.078774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.079017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.079030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.079040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.079050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 7496.67 IOPS, 29.28 MiB/s [2024-12-07T10:50:05.182Z] [2024-12-07 11:50:05.093883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.094588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.094635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.094650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.094920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.095174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.095189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.095200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.095212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.108084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.108798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.828 [2024-12-07 11:50:05.108845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.828 [2024-12-07 11:50:05.108861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.828 [2024-12-07 11:50:05.109142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.828 [2024-12-07 11:50:05.109386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.828 [2024-12-07 11:50:05.109400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.828 [2024-12-07 11:50:05.109411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.828 [2024-12-07 11:50:05.109422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.828 [2024-12-07 11:50:05.122296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.828 [2024-12-07 11:50:05.122930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.829 [2024-12-07 11:50:05.122955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.829 [2024-12-07 11:50:05.122967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.829 [2024-12-07 11:50:05.123213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.829 [2024-12-07 11:50:05.123452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.829 [2024-12-07 11:50:05.123464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.829 [2024-12-07 11:50:05.123474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.829 [2024-12-07 11:50:05.123484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.829 [2024-12-07 11:50:05.136349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.829 [2024-12-07 11:50:05.137044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.829 [2024-12-07 11:50:05.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.829 [2024-12-07 11:50:05.137112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.829 [2024-12-07 11:50:05.137382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.829 [2024-12-07 11:50:05.137625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.829 [2024-12-07 11:50:05.137639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.829 [2024-12-07 11:50:05.137651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.829 [2024-12-07 11:50:05.137662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.829 [2024-12-07 11:50:05.150535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.829 [2024-12-07 11:50:05.151202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.829 [2024-12-07 11:50:05.151249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.829 [2024-12-07 11:50:05.151264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.829 [2024-12-07 11:50:05.151535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.829 [2024-12-07 11:50:05.151778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.829 [2024-12-07 11:50:05.151792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.829 [2024-12-07 11:50:05.151803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.829 [2024-12-07 11:50:05.151815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:05.829 [2024-12-07 11:50:05.164694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:05.829 [2024-12-07 11:50:05.165367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.829 [2024-12-07 11:50:05.165414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:05.829 [2024-12-07 11:50:05.165430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:05.829 [2024-12-07 11:50:05.165700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:05.829 [2024-12-07 11:50:05.165943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:05.829 [2024-12-07 11:50:05.165957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:05.829 [2024-12-07 11:50:05.165968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:05.829 [2024-12-07 11:50:05.165980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.097 [2024-12-07 11:50:05.178851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.097 [2024-12-07 11:50:05.179517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.097 [2024-12-07 11:50:05.179564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.097 [2024-12-07 11:50:05.179579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.097 [2024-12-07 11:50:05.179850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.097 [2024-12-07 11:50:05.180110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.097 [2024-12-07 11:50:05.180126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.097 [2024-12-07 11:50:05.180137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.097 [2024-12-07 11:50:05.180148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.097 [2024-12-07 11:50:05.193061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.097 [2024-12-07 11:50:05.193681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.097 [2024-12-07 11:50:05.193706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.097 [2024-12-07 11:50:05.193718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.097 [2024-12-07 11:50:05.193956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.097 [2024-12-07 11:50:05.194204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.097 [2024-12-07 11:50:05.194217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.097 [2024-12-07 11:50:05.194227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.097 [2024-12-07 11:50:05.194237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.097 [2024-12-07 11:50:05.207110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.097 [2024-12-07 11:50:05.207556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.097 [2024-12-07 11:50:05.207581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.097 [2024-12-07 11:50:05.207592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.097 [2024-12-07 11:50:05.207829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.097 [2024-12-07 11:50:05.208073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.097 [2024-12-07 11:50:05.208086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.097 [2024-12-07 11:50:05.208096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.097 [2024-12-07 11:50:05.208105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.097 [2024-12-07 11:50:05.221202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.097 [2024-12-07 11:50:05.221876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.221924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.221941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.222228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.222480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.222496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.222512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.222524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.235416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.236056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.236081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.236094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.236331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.236569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.236583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.236593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.236603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.249464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.250235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.250282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.250298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.250568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.250812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.250826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.250837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.250848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.263525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.264127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.264174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.264191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.264464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.264708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.264721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.264732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.264744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.277621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.278308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.278355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.278371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.278641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.278884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.278898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.278909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.278921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.291859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.292521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.292568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.292583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.292853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.293106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.293121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.293133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.293145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.306035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.098 [2024-12-07 11:50:05.306704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.098 [2024-12-07 11:50:05.306752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.098 [2024-12-07 11:50:05.306768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.098 [2024-12-07 11:50:05.307049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.098 [2024-12-07 11:50:05.307293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.098 [2024-12-07 11:50:05.307307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.098 [2024-12-07 11:50:05.307318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.098 [2024-12-07 11:50:05.307329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.098 [2024-12-07 11:50:05.320221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.098 [2024-12-07 11:50:05.320803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.098 [2024-12-07 11:50:05.320829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.098 [2024-12-07 11:50:05.320841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.098 [2024-12-07 11:50:05.321088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.098 [2024-12-07 11:50:05.321326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.098 [2024-12-07 11:50:05.321339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.098 [2024-12-07 11:50:05.321349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.098 [2024-12-07 11:50:05.321359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.098 [2024-12-07 11:50:05.334472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.098 [2024-12-07 11:50:05.335211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.098 [2024-12-07 11:50:05.335259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.098 [2024-12-07 11:50:05.335275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.098 [2024-12-07 11:50:05.335545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.098 [2024-12-07 11:50:05.335788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.098 [2024-12-07 11:50:05.335802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.098 [2024-12-07 11:50:05.335813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.098 [2024-12-07 11:50:05.335824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.098 [2024-12-07 11:50:05.348722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.098 [2024-12-07 11:50:05.349400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.098 [2024-12-07 11:50:05.349448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.098 [2024-12-07 11:50:05.349464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.098 [2024-12-07 11:50:05.349735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.098 [2024-12-07 11:50:05.349978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.098 [2024-12-07 11:50:05.349994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.098 [2024-12-07 11:50:05.350007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.350030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.362903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.363526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.363552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.363564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.363807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.364052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.364066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.364076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.364086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.377052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.377609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.377633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.377644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.377882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.378126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.378139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.378149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.378159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.391265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.391868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.391892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.391903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.392146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.392384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.392396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.392406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.392415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.405500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.406206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.406253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.406269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.406540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.406791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.406806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.406818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.406829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.419722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.420393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.420440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.420455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.420725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.420968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.420982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.420994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.421006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.099 [2024-12-07 11:50:05.433909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.099 [2024-12-07 11:50:05.434503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.099 [2024-12-07 11:50:05.434529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.099 [2024-12-07 11:50:05.434541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.099 [2024-12-07 11:50:05.434779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.099 [2024-12-07 11:50:05.435022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.099 [2024-12-07 11:50:05.435035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.099 [2024-12-07 11:50:05.435046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.099 [2024-12-07 11:50:05.435055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.448219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.448701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.448724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.448735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.448972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.449216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.449233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.449247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.449257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.462381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.463085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.463132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.463149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.463422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.463664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.463679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.463690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.463702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.476576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.477132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.477180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.477205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.477475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.477718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.477732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.477743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.477755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.490637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.491236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.491262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.491274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.491513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.491750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.491763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.491773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.491783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.504862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.505512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.505560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.505576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.505846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.506097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.506112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.506124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.506136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.519013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.519628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.519653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.519665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.519903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.520149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.520162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.520172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.402 [2024-12-07 11:50:05.520182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.402 [2024-12-07 11:50:05.533068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.402 [2024-12-07 11:50:05.533771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.402 [2024-12-07 11:50:05.533819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.402 [2024-12-07 11:50:05.533834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.402 [2024-12-07 11:50:05.534115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.402 [2024-12-07 11:50:05.534358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.402 [2024-12-07 11:50:05.534373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.402 [2024-12-07 11:50:05.534384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.534395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.547293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.547988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.548048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.548064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.548335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.548578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.548592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.548603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.548615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.561516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.562111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.562136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.562148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.562387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.562624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.562637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.562647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.562657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.575765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.576465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.576513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.576529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.576799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.577052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.577067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.577078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.577090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.589972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.590547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.590573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.590585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.590842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.591089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.591103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.591113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.591122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.604217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.604853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.604901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.604916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.605197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.605440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.605455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.605466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.605477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.618357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.618977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.619003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.619021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.619260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.619498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.619510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.619520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.619530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.632426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.633074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.633122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.633137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.633407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.633650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.633669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.633680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.633692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.646581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.647168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.647195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.647207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.647445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.647683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.647695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.647705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.647715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.660823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.661535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.661582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.661597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.661867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.403 [2024-12-07 11:50:05.662121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.403 [2024-12-07 11:50:05.662136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.403 [2024-12-07 11:50:05.662147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.403 [2024-12-07 11:50:05.662158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.403 [2024-12-07 11:50:05.675035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.403 [2024-12-07 11:50:05.675536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.403 [2024-12-07 11:50:05.675561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.403 [2024-12-07 11:50:05.675573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.403 [2024-12-07 11:50:05.675810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.404 [2024-12-07 11:50:05.676064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.404 [2024-12-07 11:50:05.676078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.404 [2024-12-07 11:50:05.676088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.404 [2024-12-07 11:50:05.676102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.404 [2024-12-07 11:50:05.689196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.404 [2024-12-07 11:50:05.689801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.404 [2024-12-07 11:50:05.689825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.404 [2024-12-07 11:50:05.689836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.404 [2024-12-07 11:50:05.690080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.404 [2024-12-07 11:50:05.690317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.404 [2024-12-07 11:50:05.690331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.404 [2024-12-07 11:50:05.690341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.404 [2024-12-07 11:50:05.690350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.404 [2024-12-07 11:50:05.703319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.404 [2024-12-07 11:50:05.703878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.404 [2024-12-07 11:50:05.703901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.404 [2024-12-07 11:50:05.703912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.404 [2024-12-07 11:50:05.704158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.404 [2024-12-07 11:50:05.704396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.404 [2024-12-07 11:50:05.704408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.404 [2024-12-07 11:50:05.704418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.404 [2024-12-07 11:50:05.704428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.404 [2024-12-07 11:50:05.717520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.404 [2024-12-07 11:50:05.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.404 [2024-12-07 11:50:05.718187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.404 [2024-12-07 11:50:05.718204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.404 [2024-12-07 11:50:05.718474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.404 [2024-12-07 11:50:05.718717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.404 [2024-12-07 11:50:05.718731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.404 [2024-12-07 11:50:05.718742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.404 [2024-12-07 11:50:05.718754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.404 [2024-12-07 11:50:05.731664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.404 [2024-12-07 11:50:05.732276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.404 [2024-12-07 11:50:05.732302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.404 [2024-12-07 11:50:05.732314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.404 [2024-12-07 11:50:05.732553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.404 [2024-12-07 11:50:05.732791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.404 [2024-12-07 11:50:05.732803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.404 [2024-12-07 11:50:05.732814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.404 [2024-12-07 11:50:05.732823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.745899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.746558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.746575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.746846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.747098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.747114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.704 [2024-12-07 11:50:05.747125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.704 [2024-12-07 11:50:05.747137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.760003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.760653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.760700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.760716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.760986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.761240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.761255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.704 [2024-12-07 11:50:05.761267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.704 [2024-12-07 11:50:05.761278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.774151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.774568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.774593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.774610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.774849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.775094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.775107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.704 [2024-12-07 11:50:05.775118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.704 [2024-12-07 11:50:05.775128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.788254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.788860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.788884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.788895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.789141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.789378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.789391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.704 [2024-12-07 11:50:05.789401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.704 [2024-12-07 11:50:05.789410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.802316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.802883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.802907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.802918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.803162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.803400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.803413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.704 [2024-12-07 11:50:05.803423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.704 [2024-12-07 11:50:05.803432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.704 [2024-12-07 11:50:05.816522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.704 [2024-12-07 11:50:05.816961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.704 [2024-12-07 11:50:05.816986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.704 [2024-12-07 11:50:05.816997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.704 [2024-12-07 11:50:05.817249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.704 [2024-12-07 11:50:05.817493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.704 [2024-12-07 11:50:05.817506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.817516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.817526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.830652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.831299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.831323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.831334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.831570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.831807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.831819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.831829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.831839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.844722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.845332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.845356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.845367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.845605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.845842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.845854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.845864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.845874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.858979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.859683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.859731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.859747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.860028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.860272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.860291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.860308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.860319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.873195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.873821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.873846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.873858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.874103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.874343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.874356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.874378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.874388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.887260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.887706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.887730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.887741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.887978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.888222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.888236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.888246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.888255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.901387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.901940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.901962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.901973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.902218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.902455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.902468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.902478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.902491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.915571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.916079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.916103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.916115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.916352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.916589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.916601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.916611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.916621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.929731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.930359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.930382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.930394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.930631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.930868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.930880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.930890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.930899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.943759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.944437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.705 [2024-12-07 11:50:05.944485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.705 [2024-12-07 11:50:05.944501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.705 [2024-12-07 11:50:05.944771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.705 [2024-12-07 11:50:05.945022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.705 [2024-12-07 11:50:05.945036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.705 [2024-12-07 11:50:05.945047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.705 [2024-12-07 11:50:05.945059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.705 [2024-12-07 11:50:05.957921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:06.705 [2024-12-07 11:50:05.958646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.706 [2024-12-07 11:50:05.958693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:06.706 [2024-12-07 11:50:05.958708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:06.706 [2024-12-07 11:50:05.958978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:06.706 [2024-12-07 11:50:05.959230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:06.706 [2024-12-07 11:50:05.959245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:06.706 [2024-12-07 11:50:05.959257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:06.706 [2024-12-07 11:50:05.959269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:06.706 [2024-12-07 11:50:05.972135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:05.972833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:05.972881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:05.972897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:05.973177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:05.973422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:05.973436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:05.973446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:05.973458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.706 [2024-12-07 11:50:05.986327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:05.986986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:05.987041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:05.987058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:05.987328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:05.987571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:05.987585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:05.987596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:05.987607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.706 [2024-12-07 11:50:06.000496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:06.001152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:06.001200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:06.001220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:06.001490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:06.001734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:06.001748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:06.001760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:06.001771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.706 [2024-12-07 11:50:06.014634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:06.015218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:06.015244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:06.015256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:06.015495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:06.015733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:06.015745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:06.015755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:06.015765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.706 [2024-12-07 11:50:06.028886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:06.029487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:06.029533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:06.029548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:06.029818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:06.030073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:06.030088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:06.030099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:06.030111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.706 [2024-12-07 11:50:06.042968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.706 [2024-12-07 11:50:06.043640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.706 [2024-12-07 11:50:06.043687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.706 [2024-12-07 11:50:06.043703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.706 [2024-12-07 11:50:06.043973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.706 [2024-12-07 11:50:06.044232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.706 [2024-12-07 11:50:06.044247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.706 [2024-12-07 11:50:06.044258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.706 [2024-12-07 11:50:06.044270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.968 [2024-12-07 11:50:06.057126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.968 [2024-12-07 11:50:06.057807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.968 [2024-12-07 11:50:06.057855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.968 [2024-12-07 11:50:06.057870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.968 [2024-12-07 11:50:06.058151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.968 [2024-12-07 11:50:06.058395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.968 [2024-12-07 11:50:06.058409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.968 [2024-12-07 11:50:06.058420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.968 [2024-12-07 11:50:06.058432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.968 [2024-12-07 11:50:06.071308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.968 [2024-12-07 11:50:06.072036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.968 [2024-12-07 11:50:06.072083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.968 [2024-12-07 11:50:06.072101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.968 [2024-12-07 11:50:06.072373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.072616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.072630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.072641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.072653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.085529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.086152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.086199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.086217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.086487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.086730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.086745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.086761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.086773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 5622.50 IOPS, 21.96 MiB/s [2024-12-07T10:50:06.323Z] [2024-12-07 11:50:06.099627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.100337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.100384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.100400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.100670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.100913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.100927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.100938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.100950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.113833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.114564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.114611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.114627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.114897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.115151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.115166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.115177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.115189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.128076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.128788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.128835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.128851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.129131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.129376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.129390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.129401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.129413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.142292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.142866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.142892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.142904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.143147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.143385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.143399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.143409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.143419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.156484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.156946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.156970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.156981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.157225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.157463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.157475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.157485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.157494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.170560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.171241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.171289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.171305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.171575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.171818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.171832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.171843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.171854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.184734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.185464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.185515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.185531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.185801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.186055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.186070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.186081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.186093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.198973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.199644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.199692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.969 [2024-12-07 11:50:06.199708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.969 [2024-12-07 11:50:06.199978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.969 [2024-12-07 11:50:06.200232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.969 [2024-12-07 11:50:06.200247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.969 [2024-12-07 11:50:06.200258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.969 [2024-12-07 11:50:06.200270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.969 [2024-12-07 11:50:06.213123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.969 [2024-12-07 11:50:06.213811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.969 [2024-12-07 11:50:06.213858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.213874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.214155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.214399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.214413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.214424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.214436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.227313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.228018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.228066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.228082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.228356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.228605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.228621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.228632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.228644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.241501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.242137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.242184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.242200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.242470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.242713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.242727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.242738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.242750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.255612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.256312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.256360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.256376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.256646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.256889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.256903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.256914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.256926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.269804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.270432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.270458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.270470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.270710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.270952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.270965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.270975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.270985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.284054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.284748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.284802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.284818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.285097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.285341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.285355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.285366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.285377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.298259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.298845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.298892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.298908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.299187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.299430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.299444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.299456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.299467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:06.970 [2024-12-07 11:50:06.312327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:06.970 [2024-12-07 11:50:06.312964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.970 [2024-12-07 11:50:06.313020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:06.970 [2024-12-07 11:50:06.313036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:06.970 [2024-12-07 11:50:06.313306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:06.970 [2024-12-07 11:50:06.313549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:06.970 [2024-12-07 11:50:06.313563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:06.970 [2024-12-07 11:50:06.313579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:06.970 [2024-12-07 11:50:06.313591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.233 [2024-12-07 11:50:06.326470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.233 [2024-12-07 11:50:06.327049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.233 [2024-12-07 11:50:06.327096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.233 [2024-12-07 11:50:06.327112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.233 [2024-12-07 11:50:06.327381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.233 [2024-12-07 11:50:06.327625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.233 [2024-12-07 11:50:06.327639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.233 [2024-12-07 11:50:06.327650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.233 [2024-12-07 11:50:06.327662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.233 [2024-12-07 11:50:06.340533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.233 [2024-12-07 11:50:06.341253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.233 [2024-12-07 11:50:06.341301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.233 [2024-12-07 11:50:06.341317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.233 [2024-12-07 11:50:06.341587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.233 [2024-12-07 11:50:06.341830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.233 [2024-12-07 11:50:06.341844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.233 [2024-12-07 11:50:06.341856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.233 [2024-12-07 11:50:06.341867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.233 [2024-12-07 11:50:06.354743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.355336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.355383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.355399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.233 [2024-12-07 11:50:06.355670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.233 [2024-12-07 11:50:06.355914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.233 [2024-12-07 11:50:06.355929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.233 [2024-12-07 11:50:06.355941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.233 [2024-12-07 11:50:06.355953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.233 [2024-12-07 11:50:06.368813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.369513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.369561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.369577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.233 [2024-12-07 11:50:06.369847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.233 [2024-12-07 11:50:06.370099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.233 [2024-12-07 11:50:06.370114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.233 [2024-12-07 11:50:06.370125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.233 [2024-12-07 11:50:06.370137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.233 [2024-12-07 11:50:06.382997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.383707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.383753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.383769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.233 [2024-12-07 11:50:06.384049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.233 [2024-12-07 11:50:06.384293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.233 [2024-12-07 11:50:06.384307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.233 [2024-12-07 11:50:06.384319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.233 [2024-12-07 11:50:06.384330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.233 [2024-12-07 11:50:06.397214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.397905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.397953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.397968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.233 [2024-12-07 11:50:06.398249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.233 [2024-12-07 11:50:06.398494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.233 [2024-12-07 11:50:06.398508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.233 [2024-12-07 11:50:06.398520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.233 [2024-12-07 11:50:06.398531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.233 [2024-12-07 11:50:06.411263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.411978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.412040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.412057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.233 [2024-12-07 11:50:06.412327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.233 [2024-12-07 11:50:06.412570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.233 [2024-12-07 11:50:06.412584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.233 [2024-12-07 11:50:06.412595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.233 [2024-12-07 11:50:06.412607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.233 [2024-12-07 11:50:06.425491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.233 [2024-12-07 11:50:06.426138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.233 [2024-12-07 11:50:06.426186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.233 [2024-12-07 11:50:06.426203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.426476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.426719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.426734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.426745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.426757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.439630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.440303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.440351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.440367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.440637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.440879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.440893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.440904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.440916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.453774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.454420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.454446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.454458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.454700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.454938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.454951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.454960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.454970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.468034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.468729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.468776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.468792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.469072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.469316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.469330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.469341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.469353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.482208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.482912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.482960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.482976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.483262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.483505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.483519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.483530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.483541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.496416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.497057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.497089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.497102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.497348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.497585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.497603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.497613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.497624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.510473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.511114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.511162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.511179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.511450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.511693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.511707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.511718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.511730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.524616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.525134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.525181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.525198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.525469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.525712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.525726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.525737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.525748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.538840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.539514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.539562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.539578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.539848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.540102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.540117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.540128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.540144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.552997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.553706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.553753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.234 [2024-12-07 11:50:06.553769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.234 [2024-12-07 11:50:06.554050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.234 [2024-12-07 11:50:06.554295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.234 [2024-12-07 11:50:06.554309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.234 [2024-12-07 11:50:06.554321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.234 [2024-12-07 11:50:06.554332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.234 [2024-12-07 11:50:06.567202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.234 [2024-12-07 11:50:06.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.234 [2024-12-07 11:50:06.567937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.235 [2024-12-07 11:50:06.567952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.235 [2024-12-07 11:50:06.568232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.235 [2024-12-07 11:50:06.568476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.235 [2024-12-07 11:50:06.568490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.235 [2024-12-07 11:50:06.568501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.235 [2024-12-07 11:50:06.568513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.235 [2024-12-07 11:50:06.581371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.235 [2024-12-07 11:50:06.582091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.235 [2024-12-07 11:50:06.582138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.235 [2024-12-07 11:50:06.582156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.235 [2024-12-07 11:50:06.582428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.235 [2024-12-07 11:50:06.582672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.235 [2024-12-07 11:50:06.582685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.235 [2024-12-07 11:50:06.582697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.235 [2024-12-07 11:50:06.582708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.497 [2024-12-07 11:50:06.595606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.497 [2024-12-07 11:50:06.596313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.497 [2024-12-07 11:50:06.596361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.497 [2024-12-07 11:50:06.596376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.497 [2024-12-07 11:50:06.596646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.497 [2024-12-07 11:50:06.596889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.497 [2024-12-07 11:50:06.596903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.497 [2024-12-07 11:50:06.596914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.497 [2024-12-07 11:50:06.596925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.497 [2024-12-07 11:50:06.609786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.497 [2024-12-07 11:50:06.610388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.497 [2024-12-07 11:50:06.610435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.497 [2024-12-07 11:50:06.610452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.497 [2024-12-07 11:50:06.610722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.497 [2024-12-07 11:50:06.610965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.497 [2024-12-07 11:50:06.610979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.497 [2024-12-07 11:50:06.610990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.497 [2024-12-07 11:50:06.611002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.497 [2024-12-07 11:50:06.623889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.497 [2024-12-07 11:50:06.624557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.497 [2024-12-07 11:50:06.624605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.497 [2024-12-07 11:50:06.624620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.497 [2024-12-07 11:50:06.624890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.497 [2024-12-07 11:50:06.625144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.497 [2024-12-07 11:50:06.625159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.497 [2024-12-07 11:50:06.625170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.497 [2024-12-07 11:50:06.625182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.497 [2024-12-07 11:50:06.638045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.497 [2024-12-07 11:50:06.638621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.497 [2024-12-07 11:50:06.638646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.497 [2024-12-07 11:50:06.638662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.497 [2024-12-07 11:50:06.638902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.497 [2024-12-07 11:50:06.639148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.497 [2024-12-07 11:50:06.639161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.497 [2024-12-07 11:50:06.639171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.497 [2024-12-07 11:50:06.639181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.497 [2024-12-07 11:50:06.652238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.497 [2024-12-07 11:50:06.652795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.497 [2024-12-07 11:50:06.652841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.497 [2024-12-07 11:50:06.652858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.497 [2024-12-07 11:50:06.653141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.497 [2024-12-07 11:50:06.653386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.497 [2024-12-07 11:50:06.653400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.497 [2024-12-07 11:50:06.653411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.497 [2024-12-07 11:50:06.653422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.497 [2024-12-07 11:50:06.666353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.497 [2024-12-07 11:50:06.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.497 [2024-12-07 11:50:06.666990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.497 [2024-12-07 11:50:06.667008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.497 [2024-12-07 11:50:06.667287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.497 [2024-12-07 11:50:06.667531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.497 [2024-12-07 11:50:06.667545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.497 [2024-12-07 11:50:06.667556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.497 [2024-12-07 11:50:06.667568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.497 [2024-12-07 11:50:06.680422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.497 [2024-12-07 11:50:06.681007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.497 [2024-12-07 11:50:06.681039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.681051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.681289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.681530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.681551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.681561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.681571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.694658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.695343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.695390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.695406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.695676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.695918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.695932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.695944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.695956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.708818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.709516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.709564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.709580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.709850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.710104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.710119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.710130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.710142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.723016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.723705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.723768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.724049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.724293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.724307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.724323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.724335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.737240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.737946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.737993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.738009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.738289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.738533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.738546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.738558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.738569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.751427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.752032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.752080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.752097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.752367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.752609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.752623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.752634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.752645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.765523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.766295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.766342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.766357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.766628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.766870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.766884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.766896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.766912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.779773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.780449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.498 [2024-12-07 11:50:06.780496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.498 [2024-12-07 11:50:06.780512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.498 [2024-12-07 11:50:06.780782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.498 [2024-12-07 11:50:06.781034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.498 [2024-12-07 11:50:06.781049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.498 [2024-12-07 11:50:06.781060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.498 [2024-12-07 11:50:06.781072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.498 [2024-12-07 11:50:06.793942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.498 [2024-12-07 11:50:06.794520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.499 [2024-12-07 11:50:06.794545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.499 [2024-12-07 11:50:06.794558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.499 [2024-12-07 11:50:06.794795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.499 [2024-12-07 11:50:06.795057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.499 [2024-12-07 11:50:06.795072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.499 [2024-12-07 11:50:06.795082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.499 [2024-12-07 11:50:06.795092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.499 [2024-12-07 11:50:06.808179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.499 [2024-12-07 11:50:06.808876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.499 [2024-12-07 11:50:06.808923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.499 [2024-12-07 11:50:06.808939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.499 [2024-12-07 11:50:06.809219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.499 [2024-12-07 11:50:06.809463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.499 [2024-12-07 11:50:06.809477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.499 [2024-12-07 11:50:06.809489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.499 [2024-12-07 11:50:06.809501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.499 [2024-12-07 11:50:06.822370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.499 [2024-12-07 11:50:06.823124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.499 [2024-12-07 11:50:06.823171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.499 [2024-12-07 11:50:06.823187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.499 [2024-12-07 11:50:06.823457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.499 [2024-12-07 11:50:06.823709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.499 [2024-12-07 11:50:06.823726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.499 [2024-12-07 11:50:06.823737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.499 [2024-12-07 11:50:06.823749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.499 [2024-12-07 11:50:06.836429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.499 [2024-12-07 11:50:06.837094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.499 [2024-12-07 11:50:06.837141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.499 [2024-12-07 11:50:06.837159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.499 [2024-12-07 11:50:06.837431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.499 [2024-12-07 11:50:06.837673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.499 [2024-12-07 11:50:06.837688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.499 [2024-12-07 11:50:06.837700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.499 [2024-12-07 11:50:06.837712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.850579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.851292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.851339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.851355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.851625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.851868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.851883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.851894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.851905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.864768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.865348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.865395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.865415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.865687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.865929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.865943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.865955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.865966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.878830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.879412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.879438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.879450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.879688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.879926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.879939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.879949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.879966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.893058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.893508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.893532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.893544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.893781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.894024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.894038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.894048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.894058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.907158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.907830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.907877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.907894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.908176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.908425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.908439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.908450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.908462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.921323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.761 [2024-12-07 11:50:06.922043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.761 [2024-12-07 11:50:06.922090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.761 [2024-12-07 11:50:06.922105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.761 [2024-12-07 11:50:06.922375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.761 [2024-12-07 11:50:06.922618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.761 [2024-12-07 11:50:06.922632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.761 [2024-12-07 11:50:06.922643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.761 [2024-12-07 11:50:06.922654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.761 [2024-12-07 11:50:06.935559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.762 [2024-12-07 11:50:06.936326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.762 [2024-12-07 11:50:06.936374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.762 [2024-12-07 11:50:06.936390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.762 [2024-12-07 11:50:06.936661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.762 [2024-12-07 11:50:06.936904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.762 [2024-12-07 11:50:06.936918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.762 [2024-12-07 11:50:06.936929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.762 [2024-12-07 11:50:06.936941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.762 [2024-12-07 11:50:06.949604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.762 [2024-12-07 11:50:06.950302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.762 [2024-12-07 11:50:06.950350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.762 [2024-12-07 11:50:06.950365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.762 [2024-12-07 11:50:06.950635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.762 [2024-12-07 11:50:06.950878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.762 [2024-12-07 11:50:06.950892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.762 [2024-12-07 11:50:06.950908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.762 [2024-12-07 11:50:06.950919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.762 [2024-12-07 11:50:06.963783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.762 [2024-12-07 11:50:06.964502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.762 [2024-12-07 11:50:06.964550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.762 [2024-12-07 11:50:06.964565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.762 [2024-12-07 11:50:06.964836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.762 [2024-12-07 11:50:06.965089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.762 [2024-12-07 11:50:06.965104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.762 [2024-12-07 11:50:06.965116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.762 [2024-12-07 11:50:06.965128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.762 [2024-12-07 11:50:06.977992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.762 [2024-12-07 11:50:06.978704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.762 [2024-12-07 11:50:06.978752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.762 [2024-12-07 11:50:06.978770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.762 [2024-12-07 11:50:06.979047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.762 [2024-12-07 11:50:06.979290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.762 [2024-12-07 11:50:06.979304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.762 [2024-12-07 11:50:06.979315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.762 [2024-12-07 11:50:06.979327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.762 [2024-12-07 11:50:06.992215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.762 [2024-12-07 11:50:06.992804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.762 [2024-12-07 11:50:06.992851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:07.762 [2024-12-07 11:50:06.992866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:07.762 [2024-12-07 11:50:06.993146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:07.762 [2024-12-07 11:50:06.993391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.762 [2024-12-07 11:50:06.993405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.762 [2024-12-07 11:50:06.993416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.762 [2024-12-07 11:50:06.993427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.762 [2024-12-07 11:50:07.006520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.762 [2024-12-07 11:50:07.007141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.762 [2024-12-07 11:50:07.007189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.762 [2024-12-07 11:50:07.007205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.762 [2024-12-07 11:50:07.007475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.762 [2024-12-07 11:50:07.007719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.762 [2024-12-07 11:50:07.007732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.762 [2024-12-07 11:50:07.007743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.762 [2024-12-07 11:50:07.007755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.762 [2024-12-07 11:50:07.020621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.762 [2024-12-07 11:50:07.021341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.762 [2024-12-07 11:50:07.021389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.762 [2024-12-07 11:50:07.021405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.762 [2024-12-07 11:50:07.021675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.762 [2024-12-07 11:50:07.021918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.762 [2024-12-07 11:50:07.021934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.762 [2024-12-07 11:50:07.021946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.762 [2024-12-07 11:50:07.021958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.762 [2024-12-07 11:50:07.034859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.762 [2024-12-07 11:50:07.035530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.762 [2024-12-07 11:50:07.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.762 [2024-12-07 11:50:07.035593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.762 [2024-12-07 11:50:07.035864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.762 [2024-12-07 11:50:07.036117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.762 [2024-12-07 11:50:07.036132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.762 [2024-12-07 11:50:07.036144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.762 [2024-12-07 11:50:07.036156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.762 [2024-12-07 11:50:07.049016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.762 [2024-12-07 11:50:07.049650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.762 [2024-12-07 11:50:07.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.762 [2024-12-07 11:50:07.049692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.762 [2024-12-07 11:50:07.049955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.762 [2024-12-07 11:50:07.050199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.762 [2024-12-07 11:50:07.050213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.762 [2024-12-07 11:50:07.050223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.762 [2024-12-07 11:50:07.050233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.762 [2024-12-07 11:50:07.063090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.762 [2024-12-07 11:50:07.063780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.762 [2024-12-07 11:50:07.063827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.762 [2024-12-07 11:50:07.063843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.762 [2024-12-07 11:50:07.064122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.762 [2024-12-07 11:50:07.064365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.762 [2024-12-07 11:50:07.064379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.762 [2024-12-07 11:50:07.064390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.763 [2024-12-07 11:50:07.064402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.763 [2024-12-07 11:50:07.077269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.763 [2024-12-07 11:50:07.077966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.763 [2024-12-07 11:50:07.078021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.763 [2024-12-07 11:50:07.078039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.763 [2024-12-07 11:50:07.078310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.763 [2024-12-07 11:50:07.078553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.763 [2024-12-07 11:50:07.078566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.763 [2024-12-07 11:50:07.078577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.763 [2024-12-07 11:50:07.078589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.763 [2024-12-07 11:50:07.091462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.763 [2024-12-07 11:50:07.092143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.763 [2024-12-07 11:50:07.092169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.763 [2024-12-07 11:50:07.092181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.763 [2024-12-07 11:50:07.092423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.763 [2024-12-07 11:50:07.092661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.763 [2024-12-07 11:50:07.092674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.763 [2024-12-07 11:50:07.092684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.763 [2024-12-07 11:50:07.092694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.763 4498.00 IOPS, 17.57 MiB/s [2024-12-07T10:50:07.117Z] [2024-12-07 11:50:07.105517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.763 [2024-12-07 11:50:07.106071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.763 [2024-12-07 11:50:07.106095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:07.763 [2024-12-07 11:50:07.106106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:07.763 [2024-12-07 11:50:07.106344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:07.763 [2024-12-07 11:50:07.106581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.763 [2024-12-07 11:50:07.106594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.763 [2024-12-07 11:50:07.106604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.763 [2024-12-07 11:50:07.106613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.026 [2024-12-07 11:50:07.119683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.026 [2024-12-07 11:50:07.120092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.026 [2024-12-07 11:50:07.120115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.026 [2024-12-07 11:50:07.120127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.026 [2024-12-07 11:50:07.120363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.026 [2024-12-07 11:50:07.120600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.026 [2024-12-07 11:50:07.120612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.026 [2024-12-07 11:50:07.120623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.026 [2024-12-07 11:50:07.120632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.026 [2024-12-07 11:50:07.133730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.026 [2024-12-07 11:50:07.134406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.026 [2024-12-07 11:50:07.134453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.026 [2024-12-07 11:50:07.134471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.026 [2024-12-07 11:50:07.134741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.026 [2024-12-07 11:50:07.134989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.026 [2024-12-07 11:50:07.135003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.026 [2024-12-07 11:50:07.135027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.026 [2024-12-07 11:50:07.135039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.026 [2024-12-07 11:50:07.147907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.026 [2024-12-07 11:50:07.148592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.026 [2024-12-07 11:50:07.148639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.026 [2024-12-07 11:50:07.148655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.026 [2024-12-07 11:50:07.148925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.026 [2024-12-07 11:50:07.149177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.026 [2024-12-07 11:50:07.149192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.026 [2024-12-07 11:50:07.149203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.026 [2024-12-07 11:50:07.149215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.026 [2024-12-07 11:50:07.162081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.026 [2024-12-07 11:50:07.162748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.026 [2024-12-07 11:50:07.162795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.026 [2024-12-07 11:50:07.162813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.026 [2024-12-07 11:50:07.163090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.026 [2024-12-07 11:50:07.163334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.026 [2024-12-07 11:50:07.163348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.026 [2024-12-07 11:50:07.163360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.026 [2024-12-07 11:50:07.163372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.026 [2024-12-07 11:50:07.176236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.026 [2024-12-07 11:50:07.176934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.176982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.176999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.177277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.177520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.177534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.177550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.177562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.190422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.191131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.191196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.191467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.191710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.191724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.191736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.191747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.204644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.205105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.205136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.205148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.205389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.205627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.205639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.205649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.205659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.218734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.219400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.219448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.219463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.219733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.219976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.219990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.220002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.220022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.232928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.233598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.233644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.233660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.233930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.234183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.234199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.234210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.234222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.247104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.247682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.247708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.247720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.247958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.248202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.248216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.248226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.248235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.261304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.261787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.261810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.261822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.262064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.262302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.262314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.262324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.262334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.275408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.275891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.275914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.275930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.276174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.276413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.276425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.276435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.276444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.289547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.027 [2024-12-07 11:50:07.290159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.027 [2024-12-07 11:50:07.290207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.027 [2024-12-07 11:50:07.290231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.027 [2024-12-07 11:50:07.290502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.027 [2024-12-07 11:50:07.290744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.027 [2024-12-07 11:50:07.290759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.027 [2024-12-07 11:50:07.290770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.027 [2024-12-07 11:50:07.290781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.027 [2024-12-07 11:50:07.303683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.027 [2024-12-07 11:50:07.304274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.027 [2024-12-07 11:50:07.304300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.027 [2024-12-07 11:50:07.304312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.027 [2024-12-07 11:50:07.304550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.027 [2024-12-07 11:50:07.304788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.027 [2024-12-07 11:50:07.304801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.027 [2024-12-07 11:50:07.304812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.304822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.028 [2024-12-07 11:50:07.317921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.028 [2024-12-07 11:50:07.318531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.028 [2024-12-07 11:50:07.318555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.028 [2024-12-07 11:50:07.318566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.028 [2024-12-07 11:50:07.318810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.028 [2024-12-07 11:50:07.319054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.028 [2024-12-07 11:50:07.319068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.028 [2024-12-07 11:50:07.319078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.319088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.028 [2024-12-07 11:50:07.331984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.028 [2024-12-07 11:50:07.332457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.028 [2024-12-07 11:50:07.332480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.028 [2024-12-07 11:50:07.332492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.028 [2024-12-07 11:50:07.332730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2781522 Killed "${NVMF_APP[@]}" "$@"
00:38:08.028 [2024-12-07 11:50:07.332970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.028 [2024-12-07 11:50:07.332983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.028 [2024-12-07 11:50:07.332994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.333004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2783389
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2783389
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2783389 ']'
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:08.028 11:50:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:08.028 [2024-12-07 11:50:07.346110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.028 [2024-12-07 11:50:07.346669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.028 [2024-12-07 11:50:07.346718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.028 [2024-12-07 11:50:07.346739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.028 [2024-12-07 11:50:07.347034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.028 [2024-12-07 11:50:07.347280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.028 [2024-12-07 11:50:07.347295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.028 [2024-12-07 11:50:07.347306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.347318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.028 [2024-12-07 11:50:07.360221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.028 [2024-12-07 11:50:07.360799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.028 [2024-12-07 11:50:07.360825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.028 [2024-12-07 11:50:07.360838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.028 [2024-12-07 11:50:07.361084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.028 [2024-12-07 11:50:07.361323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.028 [2024-12-07 11:50:07.361336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.028 [2024-12-07 11:50:07.361346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.361357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.028 [2024-12-07 11:50:07.374268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.028 [2024-12-07 11:50:07.374879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.028 [2024-12-07 11:50:07.374904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.028 [2024-12-07 11:50:07.374916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.028 [2024-12-07 11:50:07.375161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.028 [2024-12-07 11:50:07.375399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.028 [2024-12-07 11:50:07.375413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.028 [2024-12-07 11:50:07.375423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.028 [2024-12-07 11:50:07.375433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.290 [2024-12-07 11:50:07.388304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.290 [2024-12-07 11:50:07.388880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.290 [2024-12-07 11:50:07.388904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.290 [2024-12-07 11:50:07.388915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.290 [2024-12-07 11:50:07.389161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.290 [2024-12-07 11:50:07.389404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.290 [2024-12-07 11:50:07.389417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.290 [2024-12-07 11:50:07.389427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.290 [2024-12-07 11:50:07.389438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.290 [2024-12-07 11:50:07.402353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.290 [2024-12-07 11:50:07.402973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.290 [2024-12-07 11:50:07.402996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.290 [2024-12-07 11:50:07.403009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.290 [2024-12-07 11:50:07.403256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.290 [2024-12-07 11:50:07.403495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.403507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.403518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.403528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.416416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.417000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.417032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.417044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.417283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.417523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.417537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.417547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.417558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.426256] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:38:08.291 [2024-12-07 11:50:07.426353] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:08.291 [2024-12-07 11:50:07.430608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.431365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.431414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.431432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.431715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.431960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.431975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.431987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.431999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.444673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.445313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.445340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.445353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.445592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.445831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.445844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.445855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.445864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.458754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.459438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.459462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.459474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.459714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.459954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.459967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.459978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.459988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.472899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.473618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.473667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.473683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.473960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.474215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.474235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.474247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.474259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.487146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.487744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.487770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.487782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.488030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.488279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.488293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.488303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.488314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.501430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.502060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.502092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.502104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.502352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.502591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.502605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.502616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.502626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.515508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.516128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.516152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.516164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.516402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.516640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.516653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.516663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.516677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.529577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.291 [2024-12-07 11:50:07.530147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.291 [2024-12-07 11:50:07.530170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.291 [2024-12-07 11:50:07.530181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.291 [2024-12-07 11:50:07.530420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.291 [2024-12-07 11:50:07.530658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.291 [2024-12-07 11:50:07.530671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.291 [2024-12-07 11:50:07.530682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.291 [2024-12-07 11:50:07.530692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.291 [2024-12-07 11:50:07.543801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.292 [2024-12-07 11:50:07.544475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.292 [2024-12-07 11:50:07.544522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.292 [2024-12-07 11:50:07.544539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.292 [2024-12-07 11:50:07.544812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.292 [2024-12-07 11:50:07.545065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.292 [2024-12-07 11:50:07.545080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.292 [2024-12-07 11:50:07.545092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.292 [2024-12-07 11:50:07.545105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.292 [2024-12-07 11:50:07.557995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.292 [2024-12-07 11:50:07.558625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.292 [2024-12-07 11:50:07.558650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.292 [2024-12-07 11:50:07.558663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.292 [2024-12-07 11:50:07.558903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.292 [2024-12-07 11:50:07.559150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.292 [2024-12-07 11:50:07.559164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.292 [2024-12-07 11:50:07.559174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.292 [2024-12-07 11:50:07.559184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.292 [2024-12-07 11:50:07.571395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:08.292 [2024-12-07 11:50:07.572051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.292 [2024-12-07 11:50:07.572718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.292 [2024-12-07 11:50:07.572765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.292 [2024-12-07 11:50:07.572781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.292 [2024-12-07 11:50:07.573062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.292 [2024-12-07 11:50:07.573308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.292 [2024-12-07 11:50:07.573321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.292 [2024-12-07 11:50:07.573334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.292 [2024-12-07 11:50:07.573346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.292 [2024-12-07 11:50:07.586233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.292 [2024-12-07 11:50:07.586758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.292 [2024-12-07 11:50:07.586784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.292 [2024-12-07 11:50:07.586796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.292 [2024-12-07 11:50:07.587040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.292 [2024-12-07 11:50:07.587280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.292 [2024-12-07 11:50:07.587293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.292 [2024-12-07 11:50:07.587303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.292 [2024-12-07 11:50:07.587314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.292 [2024-12-07 11:50:07.600425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.292 [2024-12-07 11:50:07.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.292 [2024-12-07 11:50:07.601056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.292 [2024-12-07 11:50:07.601067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.292 [2024-12-07 11:50:07.601307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.292 [2024-12-07 11:50:07.601545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.292 [2024-12-07 11:50:07.601558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.292 [2024-12-07 11:50:07.601568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.292 [2024-12-07 11:50:07.601578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.292 [2024-12-07 11:50:07.614683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.292 [2024-12-07 11:50:07.615399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.292 [2024-12-07 11:50:07.615452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.292 [2024-12-07 11:50:07.615468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.292 [2024-12-07 11:50:07.615739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.292 [2024-12-07 11:50:07.615985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.292 [2024-12-07 11:50:07.616000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.292 [2024-12-07 11:50:07.616020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.292 [2024-12-07 11:50:07.616032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.292 [2024-12-07 11:50:07.628744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.292 [2024-12-07 11:50:07.629347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.292 [2024-12-07 11:50:07.629372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.292 [2024-12-07 11:50:07.629385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.292 [2024-12-07 11:50:07.629626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.292 [2024-12-07 11:50:07.629865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.292 [2024-12-07 11:50:07.629877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.292 [2024-12-07 11:50:07.629887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.292 [2024-12-07 11:50:07.629897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.555 [2024-12-07 11:50:07.642999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.643598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.643622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.643633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.643873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.644118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.644131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.644142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.644152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.646961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:08.555 [2024-12-07 11:50:07.646991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:08.555 [2024-12-07 11:50:07.646999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:08.555 [2024-12-07 11:50:07.647009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:08.555 [2024-12-07 11:50:07.647024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:08.555 [2024-12-07 11:50:07.648744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:38:08.555 [2024-12-07 11:50:07.648860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:08.555 [2024-12-07 11:50:07.648886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:38:08.555 [2024-12-07 11:50:07.657257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.657869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.657894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.657907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.658153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.658394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.658407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.658418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.658430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.671527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.672221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.672271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.672287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.672562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.672807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.672821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.672833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.672845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.685738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.686363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.686389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.686401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.686642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.686882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.686896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.686918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.686929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.699952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.700686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.700738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.700756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.701041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.701287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.701302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.701314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.701327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.714223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.714897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.714944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.714961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.715242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.715487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.715502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.715514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.715527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.728459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.729086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.729134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.729153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.729435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.729682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.729696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.729708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.729720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.742625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.555 [2024-12-07 11:50:07.743352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.555 [2024-12-07 11:50:07.743405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.555 [2024-12-07 11:50:07.743422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.555 [2024-12-07 11:50:07.743695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.555 [2024-12-07 11:50:07.743941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.555 [2024-12-07 11:50:07.743956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.555 [2024-12-07 11:50:07.743968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.555 [2024-12-07 11:50:07.743981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.555 [2024-12-07 11:50:07.756861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.757450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.757498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.757514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.757785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.758040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.758055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.758067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.758078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.770958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.771679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.771726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.771743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.772024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.772269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.772284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.772296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.772308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.785190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.785933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.785981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.785998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.786282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.786528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.786542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.786553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.786565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.799451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.800253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.800300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.800316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.800607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.800851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.800865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.800877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.800889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.813545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.814003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.814110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.814123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.814364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.814603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.814616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.814626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.814636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.827740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.828335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.828383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.828399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.828671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.828923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.828938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.828949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.828961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.841854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.842597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.842644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.842660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.842930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.843184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.843199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.843211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.843223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.856095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.856776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.856823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.856838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.857119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.857364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.857378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.857389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.857402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.870291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.871036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.871083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.871099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.871370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.871614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.871628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.871644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.556 [2024-12-07 11:50:07.871655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.556 [2024-12-07 11:50:07.884546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.556 [2024-12-07 11:50:07.885139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.556 [2024-12-07 11:50:07.885187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.556 [2024-12-07 11:50:07.885202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.556 [2024-12-07 11:50:07.885473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.556 [2024-12-07 11:50:07.885717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.556 [2024-12-07 11:50:07.885731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.556 [2024-12-07 11:50:07.885743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.557 [2024-12-07 11:50:07.885755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.557 [2024-12-07 11:50:07.898646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.557 [2024-12-07 11:50:07.899154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.557 [2024-12-07 11:50:07.899202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.557 [2024-12-07 11:50:07.899220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.557 [2024-12-07 11:50:07.899491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.557 [2024-12-07 11:50:07.899735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.557 [2024-12-07 11:50:07.899750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.557 [2024-12-07 11:50:07.899761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.557 [2024-12-07 11:50:07.899773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.818 [2024-12-07 11:50:07.912886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.818 [2024-12-07 11:50:07.913486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.819 [2024-12-07 11:50:07.913511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.819 [2024-12-07 11:50:07.913524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.819 [2024-12-07 11:50:07.913762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.819 [2024-12-07 11:50:07.914000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.819 [2024-12-07 11:50:07.914019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.819 [2024-12-07 11:50:07.914030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.819 [2024-12-07 11:50:07.914040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.819 [2024-12-07 11:50:07.927146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.819 [2024-12-07 11:50:07.927867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.819 [2024-12-07 11:50:07.927915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.819 [2024-12-07 11:50:07.927931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.819 [2024-12-07 11:50:07.928212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.819 [2024-12-07 11:50:07.928456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.819 [2024-12-07 11:50:07.928471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.819 [2024-12-07 11:50:07.928482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.819 [2024-12-07 11:50:07.928494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.819 [2024-12-07 11:50:07.941371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.819 [2024-12-07 11:50:07.942116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.819 [2024-12-07 11:50:07.942163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.819 [2024-12-07 11:50:07.942180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.819 [2024-12-07 11:50:07.942452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.819 [2024-12-07 11:50:07.942695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.819 [2024-12-07 11:50:07.942709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.819 [2024-12-07 11:50:07.942720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.819 [2024-12-07 11:50:07.942733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.819 [2024-12-07 11:50:07.955606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.819 [2024-12-07 11:50:07.956327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.819 [2024-12-07 11:50:07.956375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:38:08.819 [2024-12-07 11:50:07.956391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:38:08.819 [2024-12-07 11:50:07.956662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:38:08.819 [2024-12-07 11:50:07.956905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.819 [2024-12-07 11:50:07.956920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.819 [2024-12-07 11:50:07.956931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.819 [2024-12-07 11:50:07.956943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.819 [2024-12-07 11:50:07.969813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:07.970553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:07.970605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:07.970621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:07.970892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:07.971145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.819 [2024-12-07 11:50:07.971161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.819 [2024-12-07 11:50:07.971172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.819 [2024-12-07 11:50:07.971184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.819 [2024-12-07 11:50:07.984071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:07.984752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:07.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:07.984815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:07.985095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:07.985339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.819 [2024-12-07 11:50:07.985354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.819 [2024-12-07 11:50:07.985365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.819 [2024-12-07 11:50:07.985377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.819 [2024-12-07 11:50:07.998245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:07.998731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:07.998756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:07.998768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:07.999007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:07.999252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.819 [2024-12-07 11:50:07.999265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.819 [2024-12-07 11:50:07.999275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.819 [2024-12-07 11:50:07.999285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.819 [2024-12-07 11:50:08.012391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:08.013116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:08.013164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:08.013181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:08.013457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:08.013701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.819 [2024-12-07 11:50:08.013715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.819 [2024-12-07 11:50:08.013726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.819 [2024-12-07 11:50:08.013738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.819 [2024-12-07 11:50:08.026631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:08.027156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:08.027204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:08.027221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:08.027493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:08.027736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.819 [2024-12-07 11:50:08.027750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.819 [2024-12-07 11:50:08.027761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.819 [2024-12-07 11:50:08.027773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.819 [2024-12-07 11:50:08.040892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.819 [2024-12-07 11:50:08.041507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.819 [2024-12-07 11:50:08.041533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.819 [2024-12-07 11:50:08.041545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.819 [2024-12-07 11:50:08.041783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.819 [2024-12-07 11:50:08.042028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.042042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.042052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.042062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.055149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.055864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.055912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.055927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.056206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.056451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.056470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.056482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.056494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.069363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.070127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.070175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.070193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.070464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.070708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.070722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.070734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.070745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.083411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.084236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.084284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.084299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.084570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.084814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.084828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.084839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.084851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 3748.33 IOPS, 14.64 MiB/s [2024-12-07T10:50:08.174Z] [2024-12-07 11:50:08.099253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.099830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.099904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.099920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.100200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.100448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.100462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.100477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.100489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.113384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.114114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.114176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.114447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.114691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.114705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.114717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.114730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.127618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.128319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.128366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.128382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.128653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.128897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.128912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.128923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.128936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.141815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.142411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.142436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.142448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.142686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.142924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.142937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.142947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.142957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.820 [2024-12-07 11:50:08.156036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.820 [2024-12-07 11:50:08.156614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.820 [2024-12-07 11:50:08.156660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:08.820 [2024-12-07 11:50:08.156677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:08.820 [2024-12-07 11:50:08.156949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:08.820 [2024-12-07 11:50:08.157203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.820 [2024-12-07 11:50:08.157218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.820 [2024-12-07 11:50:08.157229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.820 [2024-12-07 11:50:08.157241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.082 [2024-12-07 11:50:08.170098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.082 [2024-12-07 11:50:08.170863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.082 [2024-12-07 11:50:08.170911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.082 [2024-12-07 11:50:08.170927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.082 [2024-12-07 11:50:08.171205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.082 [2024-12-07 11:50:08.171450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.082 [2024-12-07 11:50:08.171465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.082 [2024-12-07 11:50:08.171476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.082 [2024-12-07 11:50:08.171487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.082 [2024-12-07 11:50:08.184356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.082 [2024-12-07 11:50:08.184810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.082 [2024-12-07 11:50:08.184835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.082 [2024-12-07 11:50:08.184847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.082 [2024-12-07 11:50:08.185091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.082 [2024-12-07 11:50:08.185331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.082 [2024-12-07 11:50:08.185344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.082 [2024-12-07 11:50:08.185354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.082 [2024-12-07 11:50:08.185364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.082 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.082 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:09.082 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.082 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.082 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.082 [2024-12-07 11:50:08.198475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.082 [2024-12-07 11:50:08.199117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.082 [2024-12-07 11:50:08.199165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.082 [2024-12-07 11:50:08.199182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.082 [2024-12-07 11:50:08.199454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.082 [2024-12-07 11:50:08.199697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.082 [2024-12-07 11:50:08.199712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.082 [2024-12-07 11:50:08.199723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.199736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 [2024-12-07 11:50:08.212635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.213222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.213249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.213262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.213500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.213739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.213752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.213762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.213773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 [2024-12-07 11:50:08.226875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.227500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.227524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.227536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.227773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.228018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.228031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.228041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.228051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.083 [2024-12-07 11:50:08.236202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.083 [2024-12-07 11:50:08.241131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.241805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.241853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.241869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.083 [2024-12-07 11:50:08.242149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.242394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.242408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.242420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:38:09.083 [2024-12-07 11:50:08.242431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.083 [2024-12-07 11:50:08.255287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.255849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.255896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.255911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.256193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.256437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.256451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.256463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.256475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 [2024-12-07 11:50:08.269341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.270019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.270045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.270058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.270298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.270541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.270555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.270565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.270575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 [2024-12-07 11:50:08.283441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.284056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.284102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.284118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.284389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.284632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.284645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.284657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.284669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 Malloc0 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.083 [2024-12-07 11:50:08.297575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.298064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.298098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.298112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.298366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.298605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.298618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.298628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.298638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.083 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.083 [2024-12-07 11:50:08.311768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.083 [2024-12-07 11:50:08.312474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.083 [2024-12-07 11:50:08.312521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.083 [2024-12-07 11:50:08.312536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.083 [2024-12-07 11:50:08.312809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.083 [2024-12-07 11:50:08.313061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.083 [2024-12-07 11:50:08.313078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.083 [2024-12-07 11:50:08.313089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.083 [2024-12-07 11:50:08.313101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.084 [2024-12-07 11:50:08.325995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.084 [2024-12-07 11:50:08.326616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.084 [2024-12-07 11:50:08.326641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:38:09.084 [2024-12-07 11:50:08.326653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:38:09.084 [2024-12-07 11:50:08.326893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:38:09.084 [2024-12-07 11:50:08.327125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.084 [2024-12-07 11:50:08.327138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.084 [2024-12-07 11:50:08.327151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.084 [2024-12-07 11:50:08.327162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:38:09.084 [2024-12-07 11:50:08.327172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.084 11:50:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2782220 00:38:09.084 [2024-12-07 11:50:08.340036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.084 [2024-12-07 11:50:08.376698] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:38:10.968 4280.00 IOPS, 16.72 MiB/s [2024-12-07T10:50:11.263Z] 5018.38 IOPS, 19.60 MiB/s [2024-12-07T10:50:12.206Z] 5578.00 IOPS, 21.79 MiB/s [2024-12-07T10:50:13.148Z] 6037.70 IOPS, 23.58 MiB/s [2024-12-07T10:50:14.528Z] 6413.27 IOPS, 25.05 MiB/s [2024-12-07T10:50:15.467Z] 6716.00 IOPS, 26.23 MiB/s [2024-12-07T10:50:16.405Z] 6978.54 IOPS, 27.26 MiB/s [2024-12-07T10:50:17.347Z] 7204.50 IOPS, 28.14 MiB/s 00:38:17.993 Latency(us) 00:38:17.993 [2024-12-07T10:50:17.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:17.993 Verification LBA range: start 0x0 length 0x4000 00:38:17.993 Nvme1n1 : 15.01 7393.33 28.88 9238.25 0.00 7668.13 907.95 26105.17 00:38:17.993 [2024-12-07T10:50:17.348Z] =================================================================================================================== 00:38:17.994 [2024-12-07T10:50:17.348Z] Total : 7393.33 28.88 9238.25 0.00 7668.13 907.95 26105.17 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:18.564 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:18.565 rmmod nvme_tcp 00:38:18.565 rmmod nvme_fabrics 00:38:18.565 rmmod nvme_keyring 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2783389 ']' 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2783389 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2783389 ']' 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2783389 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783389 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783389' 00:38:18.565 killing process with pid 2783389 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2783389 00:38:18.565 11:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2783389 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.506 11:50:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.506 11:50:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:21.420 00:38:21.420 real 0m30.206s 00:38:21.420 user 1m10.462s 00:38:21.420 sys 0m7.700s 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:21.420 ************************************ 00:38:21.420 END TEST nvmf_bdevperf 00:38:21.420 ************************************ 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.420 ************************************ 00:38:21.420 START TEST nvmf_target_disconnect 00:38:21.420 ************************************ 00:38:21.420 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:21.683 * Looking for test storage... 
00:38:21.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:21.683 11:50:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:21.683 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.684 
--rc genhtml_branch_coverage=1 00:38:21.684 --rc genhtml_function_coverage=1 00:38:21.684 --rc genhtml_legend=1 00:38:21.684 --rc geninfo_all_blocks=1 00:38:21.684 --rc geninfo_unexecuted_blocks=1 00:38:21.684 00:38:21.684 ' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.684 --rc genhtml_branch_coverage=1 00:38:21.684 --rc genhtml_function_coverage=1 00:38:21.684 --rc genhtml_legend=1 00:38:21.684 --rc geninfo_all_blocks=1 00:38:21.684 --rc geninfo_unexecuted_blocks=1 00:38:21.684 00:38:21.684 ' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.684 --rc genhtml_branch_coverage=1 00:38:21.684 --rc genhtml_function_coverage=1 00:38:21.684 --rc genhtml_legend=1 00:38:21.684 --rc geninfo_all_blocks=1 00:38:21.684 --rc geninfo_unexecuted_blocks=1 00:38:21.684 00:38:21.684 ' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.684 --rc genhtml_branch_coverage=1 00:38:21.684 --rc genhtml_function_coverage=1 00:38:21.684 --rc genhtml_legend=1 00:38:21.684 --rc geninfo_all_blocks=1 00:38:21.684 --rc geninfo_unexecuted_blocks=1 00:38:21.684 00:38:21.684 ' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:21.684 11:50:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:21.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:21.684 11:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:29.837 
11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:29.837 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:29.837 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:29.837 Found net devices under 0000:31:00.0: cvl_0_0 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:29.837 Found net devices under 0000:31:00.1: cvl_0_1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:29.837 11:50:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:29.837 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:29.837 11:50:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:29.837 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:29.837 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:29.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:29.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:38:29.838 00:38:29.838 --- 10.0.0.2 ping statistics --- 00:38:29.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.838 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:29.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:29.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:38:29.838 00:38:29.838 --- 10.0.0.1 ping statistics --- 00:38:29.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.838 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:29.838 11:50:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:29.838 ************************************ 00:38:29.838 START TEST nvmf_target_disconnect_tc1 00:38:29.838 ************************************ 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:29.838 [2024-12-07 11:50:28.346916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.838 [2024-12-07 11:50:28.347004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039df80 
with addr=10.0.0.2, port=4420 00:38:29.838 [2024-12-07 11:50:28.347071] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:29.838 [2024-12-07 11:50:28.347089] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:29.838 [2024-12-07 11:50:28.347102] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:29.838 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:29.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:29.838 Initializing NVMe Controllers 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:29.838 00:38:29.838 real 0m0.213s 00:38:29.838 user 0m0.087s 00:38:29.838 sys 0m0.127s 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:29.838 ************************************ 00:38:29.838 END TEST nvmf_target_disconnect_tc1 00:38:29.838 ************************************ 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:29.838 11:50:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:29.838 ************************************ 00:38:29.838 START TEST nvmf_target_disconnect_tc2 00:38:29.838 ************************************ 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2789679 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2789679 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2789679 ']' 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.838 11:50:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:29.838 [2024-12-07 11:50:28.550897] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:29.838 [2024-12-07 11:50:28.551008] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.838 [2024-12-07 11:50:28.684644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:29.838 [2024-12-07 11:50:28.785552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:29.838 [2024-12-07 11:50:28.785596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:29.838 [2024-12-07 11:50:28.785607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:29.838 [2024-12-07 11:50:28.785619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:29.838 [2024-12-07 11:50:28.785630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:29.838 [2024-12-07 11:50:28.787881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:29.838 [2024-12-07 11:50:28.788005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:29.838 [2024-12-07 11:50:28.788114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:29.838 [2024-12-07 11:50:28.788138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.101 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 Malloc0 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 [2024-12-07 11:50:29.468253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 [2024-12-07 11:50:29.510636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2790015 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:30.363 11:50:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:32.280 11:50:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2789679 00:38:32.280 11:50:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 
Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 [2024-12-07 11:50:31.556782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 
00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Read completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 Write completed with error (sct=0, sc=8) 00:38:32.280 starting I/O failed 00:38:32.280 
[2024-12-07 11:50:31.557148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:32.280 [2024-12-07 11:50:31.557553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.557595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.557935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.557948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.558257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.558290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.558496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.558508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.558882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.558892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 
00:38:32.280 [2024-12-07 11:50:31.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.559395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.559742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.559755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.560082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.560092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.280 [2024-12-07 11:50:31.560407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.280 [2024-12-07 11:50:31.560417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.280 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.560600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.560609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.560803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.560813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.561108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.561118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.561300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.561311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.561524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.561537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.561715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.561726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.562081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.562091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.562459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.562469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.562772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.562782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.563127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.563138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.563455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.563465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.563651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.563661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.563988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.563998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.564320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.564330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.564632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.564642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.564931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.564942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.565310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.565320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.565647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.565657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.565999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.566009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.566207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.566217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.566509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.566519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.566818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.566828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.567023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.567034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.567414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.567424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.567746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.567756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.568083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.568095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.568399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.568409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.568740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.568750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.568979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.568990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.569197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.569208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.569505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.569515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.569702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.569713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.569984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.569994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.570309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.570320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.570612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.570622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.281 [2024-12-07 11:50:31.570936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.570946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 
00:38:32.281 [2024-12-07 11:50:31.571178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.281 [2024-12-07 11:50:31.571188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.281 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.571497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.571507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.571856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.571867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.572190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.572199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.572492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.572501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.572697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.572707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.573021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.573031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.573345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.573354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.573596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.573607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.573938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.573947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.574250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.574260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.574561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.574570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.574857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.574867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.575199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.575209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.575503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.575512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.575813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.575822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.576114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.576123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.576327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.576338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.576658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.576673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.576969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.576979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.577327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.577338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.577448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.577457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.577550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.577560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.577754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.577766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.578189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.578199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.578424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.578434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.578737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.578746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.578945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.578955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.579275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.579284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.579481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.579490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.579723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.579732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.579922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.579931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.580227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.580237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.580535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.580545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.580875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.580884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 
00:38:32.282 [2024-12-07 11:50:31.581182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.581193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.581503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.581512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.581804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.282 [2024-12-07 11:50:31.581815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.282 qpair failed and we were unable to recover it. 00:38:32.282 [2024-12-07 11:50:31.582089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.582099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.582355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.582365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 
00:38:32.283 [2024-12-07 11:50:31.582662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.582671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.582974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.582984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.583271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.583280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.583560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.583570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.583877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.583887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 
00:38:32.283 [2024-12-07 11:50:31.584222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.584233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.584394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.584404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.584699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.584709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.584989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.584999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.585363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.585373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 
00:38:32.283 [2024-12-07 11:50:31.585678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.585687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.586019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.586029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.586330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.586340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.586650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.586659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 00:38:32.283 [2024-12-07 11:50:31.586963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.283 [2024-12-07 11:50:31.586972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.283 qpair failed and we were unable to recover it. 
00:38:32.283 [2024-12-07 11:50:31.587310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.587319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.587452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.587462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.587658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.587667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.587880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.587889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.588226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.588236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.588535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.588545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.588838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.588847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.589172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.589182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.589493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.589502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.589686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.589696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.590051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.590061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.590351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.590360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.590571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.590581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.590848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.590857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.591034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.591046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.591361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.591532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.591541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.591809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.591819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.592120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.592130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.592441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.592450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.592747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.283 [2024-12-07 11:50:31.592758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.283 qpair failed and we were unable to recover it.
00:38:32.283 [2024-12-07 11:50:31.593039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.593049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.593363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.593372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.593662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.593672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.593988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.594284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.594294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.594598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.594608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.594817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.594832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.595132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.595142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.595472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.595482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.595786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.595796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.596116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.596125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.596436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.596445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.596758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.596767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.597081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.597091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.597405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.597415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.597704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.597713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.597916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.597926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.598115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.598125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.598422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.598431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.598745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.598754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.599043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.599052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.599375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.599384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.599699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.599709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.599995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.600004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.600378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.600388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.600702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.600712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.601050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.284 [2024-12-07 11:50:31.601060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.284 qpair failed and we were unable to recover it.
00:38:32.284 [2024-12-07 11:50:31.601244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.601253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.601568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.601577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.601864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.601873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.602188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.602198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.602506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.602516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.602835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.602844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.603154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.603164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.603487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.603496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.603785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.603794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.603974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.603984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.604306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.604316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.604607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.604616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.604914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.604925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.605228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.605237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.605511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.605528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.605831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.605841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.606142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.606152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.606446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.606766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.606775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.606956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.606966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.607403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.607413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.607693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.607708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.608035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.608045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.608333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.608342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.608651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.608660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.608967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.608976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.609253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.609270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.609576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.609585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.609893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.609902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.610179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.610188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.610489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.610498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.610810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.610819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.611123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.611133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.611445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.611454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.611652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.611661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.285 [2024-12-07 11:50:31.612007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.285 [2024-12-07 11:50:31.612033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.285 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.612301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.612310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.612622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.612631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.612887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.612897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.613176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.613186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.613397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.613407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.613621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.613630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.613936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.613946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.614241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.614274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.614443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.614453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.614798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.614807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.615087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.286 [2024-12-07 11:50:31.615097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.286 qpair failed and we were unable to recover it.
00:38:32.286 [2024-12-07 11:50:31.615417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.615427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.615739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.615748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.616023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.616032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.616342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.616352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.616661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.616670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 
00:38:32.286 [2024-12-07 11:50:31.616981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.616992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.617281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.617290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.617599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.617608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.617805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.617816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.618124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.618133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 
00:38:32.286 [2024-12-07 11:50:31.618544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.618553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.618855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.618864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.619168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.619178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.619473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.619489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.619788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.619797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 
00:38:32.286 [2024-12-07 11:50:31.620085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.620095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.620411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.620422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.620725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.620735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.621031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.621041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.621209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.621219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 
00:38:32.286 [2024-12-07 11:50:31.621488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.621497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.621796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.621805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.621976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.621986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.622299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.622310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.622463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.622473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 
00:38:32.286 [2024-12-07 11:50:31.622794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.286 [2024-12-07 11:50:31.622803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.286 qpair failed and we were unable to recover it. 00:38:32.286 [2024-12-07 11:50:31.623087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.623096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.623414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.623423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.623611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.623620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.623804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.623813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 
00:38:32.287 [2024-12-07 11:50:31.624140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.624149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.624319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.624330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.624716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.624726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.624965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.624974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.625186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.625196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 
00:38:32.287 [2024-12-07 11:50:31.625516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.625526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.625894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.625904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.626207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.626217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.626532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.626542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 00:38:32.287 [2024-12-07 11:50:31.626836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.287 [2024-12-07 11:50:31.626847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.287 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-12-07 11:50:31.627179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.627190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.627502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.627512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.627721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.627730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.627921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.627931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.628255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.628265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-12-07 11:50:31.628584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.628596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.628779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.628790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.629105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.629117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.629395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.629404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.629730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.629739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-12-07 11:50:31.630035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.630044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.630377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.630387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.630684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.630693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.630999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.631009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.631175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.631185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 
00:38:32.561 [2024-12-07 11:50:31.631392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.561 [2024-12-07 11:50:31.631401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.561 qpair failed and we were unable to recover it. 00:38:32.561 [2024-12-07 11:50:31.631667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.631676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.631968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.631978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.632282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.632293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.632601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.632612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.632901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.632915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.633238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.633248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.633545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.633554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.633739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.633748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.634117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.634127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.634442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.634451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.634737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.634746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.635052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.635372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.635381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.635709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.635718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.635915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.635924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.636119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.636130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.636439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.636448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.636624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.636633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.636825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.636834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.637037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.637048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.637318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.637328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.637636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.637645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.637973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.637983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.638358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.638369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.638674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.638684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.638872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.638882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.639191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.639200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.639493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.639509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 00:38:32.562 [2024-12-07 11:50:31.639809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.562 [2024-12-07 11:50:31.639818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [2024-12-07 11:50:31.640133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:32.562 [2024-12-07 11:50:31.640145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 
00:38:32.562 qpair failed and we were unable to recover it. 
00:38:32.562 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triple repeats for every retry between 11:50:31.640453 and 11:50:31.674607 (~114 further occurrences, all with errno = 111, tqpair=0x6150003aff00, addr=10.0.0.2, port=4420); only the timestamps differ ...] 
00:38:32.565 [2024-12-07 11:50:31.674914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.674923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.675226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.675236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.675425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.675434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.675783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.675793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.676099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.676109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 
00:38:32.565 [2024-12-07 11:50:31.676415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.676426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.676808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.565 [2024-12-07 11:50:31.676817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.565 qpair failed and we were unable to recover it. 00:38:32.565 [2024-12-07 11:50:31.677117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.677127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.677442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.677757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.677766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.678057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.678067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.678350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.678360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.678636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.678645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.678947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.678957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.679174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.679183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.679501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.679511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.679822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.679831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.680119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.680129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.680446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.680455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.680767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.680776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.681106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.681115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.681320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.681330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.681540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.681840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.681849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.682068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.682078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.682395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.682404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.682699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.682709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.683009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.683022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.683349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.683358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.683677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.683686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.683870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.683880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.684075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.684085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.684412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.684422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.684733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.684742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.685040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.685050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 
00:38:32.566 [2024-12-07 11:50:31.685368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.685377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.685683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.685692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.686000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.686009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.686352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.686362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.566 qpair failed and we were unable to recover it. 00:38:32.566 [2024-12-07 11:50:31.686552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.566 [2024-12-07 11:50:31.686562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.686879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.686889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.687184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.687193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.687514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.687523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.687826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.687836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.688137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.688146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.688456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.688467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.688852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.688861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.689186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.689196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.689353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.689364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.689669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.689678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.689984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.689993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.690278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.690288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.690582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.690597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.690915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.690924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.691291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.691300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.691491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.691502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.691820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.691830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.692147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.692157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.692446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.692455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.692773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.692783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.693090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.693100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.693420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.693429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.693734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.693743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.694047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.694056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.694248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.694257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.694588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.694597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.694922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.694932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.695254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.695263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.695569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.695579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.695885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.695895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.696229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.696239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.696465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.696474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.696782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.696792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.697148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.697158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 00:38:32.567 [2024-12-07 11:50:31.697459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.567 [2024-12-07 11:50:31.697469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.567 qpair failed and we were unable to recover it. 
00:38:32.567 [2024-12-07 11:50:31.697674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.567 [2024-12-07 11:50:31.697683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.567 qpair failed and we were unable to recover it.
[The same error triplet — posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 11:50:31.697 through 11:50:31.732 (log timestamps 00:38:32.567–00:38:32.571).]
00:38:32.571 [2024-12-07 11:50:31.733180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.733189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.733355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.733366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.733687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.733697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.733972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.733981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.734293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.734302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.734486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.734496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.734832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.734841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.735164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.735173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.735493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.735502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.735782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.735792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.736077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.736087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.736414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.736423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.736771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.736781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.737139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.737148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.737433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.737442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.737731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.737742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.737963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.737972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.738287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.738297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.738601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.738611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.738841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.738850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.739176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.739186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.739503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.739513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.739891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.739901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.740210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.740220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.740502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.740512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.740820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.740830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.741117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.741127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.741453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.741462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.741771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.741781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.742078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.742088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 
00:38:32.571 [2024-12-07 11:50:31.742256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.742267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.742466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.742475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.742829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.742839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.571 [2024-12-07 11:50:31.743050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.571 [2024-12-07 11:50:31.743060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.571 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.743366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.743375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.743700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.743709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.744038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.744048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.744353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.744363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.744745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.744754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.745063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.745073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.745399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.745410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.745711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.745721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.746056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.746067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.746348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.746357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.746674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.746683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.747038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.747047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.747366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.747375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.747655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.747664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.747951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.747960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.748140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.748150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.748314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.748323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.748613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.748622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.748782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.748791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.749101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.749111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.749431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.749444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.749600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.749610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.749918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.749927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.750253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.750264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.750569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.750578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.750736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.750746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.751102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.751112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.751431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.751440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.751751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.751760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.752043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.752260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.752269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.752455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.752465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.752664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.752674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.752998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.753007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.753294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.753304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.753591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.753602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 
00:38:32.572 [2024-12-07 11:50:31.753864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.753873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.572 [2024-12-07 11:50:31.754075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.572 [2024-12-07 11:50:31.754086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.572 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.754400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.754410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.754718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.754728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.755036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.755046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.755360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.755369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.755642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.755651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.755959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.755968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.756270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.756280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.756584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.756593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.756898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.756907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.757188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.757198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.757501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.757513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.757825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.757835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.758032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.758042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.758321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.758331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.758504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.758514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.758818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.758827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.759152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.759161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.759469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.759479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.759631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.759642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.759979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.759989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.760292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.760301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.760600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.760609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.760820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.760829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.761144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.761153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.761471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.761481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.761798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.761809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.762116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.762126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.762459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.762469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.762601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.762610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.762931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.762941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.763114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.763124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.763486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.763495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.763877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.763887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 
00:38:32.573 [2024-12-07 11:50:31.764166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.764176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.764558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.764567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.764840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.764849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.765133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.765142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.573 qpair failed and we were unable to recover it. 00:38:32.573 [2024-12-07 11:50:31.765420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.573 [2024-12-07 11:50:31.765430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.765750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.765759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.766045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.766054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.766347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.766356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.766613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.766622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.766930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.766939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.767244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.767254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.767571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.767582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.767965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.767977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.768290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.768305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.768476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.768486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.768832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.768842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.769184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.769193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.769509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.769520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.769819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.769828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.770033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.770042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.770331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.770340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.770676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.770685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.770991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.771000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.771277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.771286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.771611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.771620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.772015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.772025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.772353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.772362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.772696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.772705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.772988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.772997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.773367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.773378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.773485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.773494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.773589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.773598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.773690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.773699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.774005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.774018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.774345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.774354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.774670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.774679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.774964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.774974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.775190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.775200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.775507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.775516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.775710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.775719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 
00:38:32.574 [2024-12-07 11:50:31.775897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.775907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.776199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.776208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.574 qpair failed and we were unable to recover it. 00:38:32.574 [2024-12-07 11:50:31.776527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.574 [2024-12-07 11:50:31.776537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.776836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.776845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.777164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.777174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.575 [2024-12-07 11:50:31.777497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.777506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.777807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.777816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.778129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.778139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.778439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.778449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.778745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.778754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.575 [2024-12-07 11:50:31.779062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.779072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.779401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.779410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.779704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.779713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.780028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.780038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.780323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.780332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.575 [2024-12-07 11:50:31.780658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.780668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.780972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.780982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.781199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.781212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.781516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.781525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.781842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.781852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.575 [2024-12-07 11:50:31.782177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.782187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.782499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.782509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.782815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.782824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.783121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.783131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.783450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.783459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.575 [2024-12-07 11:50:31.783766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.783775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.784076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.784085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.784385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.784693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.784702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 00:38:32.575 [2024-12-07 11:50:31.785014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.575 [2024-12-07 11:50:31.785023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.575 qpair failed and we were unable to recover it. 
00:38:32.578 [2024-12-07 11:50:31.818145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.818155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.818483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.818493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.818677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.818687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.819018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.819028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.819224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.819234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 
00:38:32.578 [2024-12-07 11:50:31.819547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.819557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.819861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.819871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.820172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.578 [2024-12-07 11:50:31.820182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.578 qpair failed and we were unable to recover it. 00:38:32.578 [2024-12-07 11:50:31.820500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.820510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.820818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.820827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.821118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.821130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.821511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.821521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.821846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.821857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.822238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.822248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.822560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.822570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.822865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.822875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.823184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.823194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.823556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.823565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.823751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.823761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.824103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.824114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.824434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.824444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.824751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.824761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.825051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.825061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.825271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.825280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.825595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.825604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.825773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.825787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.826054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.826065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.826377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.826387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.826556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.826566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.826886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.826895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.827201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.827211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.827534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.827543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.827847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.827857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.828144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.828154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.828471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.828480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.828797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.828806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.828997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.829007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.829192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.829204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.829499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.829509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.829814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.829823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 
00:38:32.579 [2024-12-07 11:50:31.830111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.830121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.830303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.830313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.579 qpair failed and we were unable to recover it. 00:38:32.579 [2024-12-07 11:50:31.830628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.579 [2024-12-07 11:50:31.830638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.830799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.830809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.831015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.831025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.831235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.831246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.831534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.831543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.831847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.831857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.832167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.832177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.832487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.832496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.832665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.832675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.832885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.832894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.833255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.833265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.833578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.833587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.833790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.833800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.834115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.834125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.834446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.834455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.834751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.834761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.834948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.834958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.835264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.835275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.835645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.835907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.835917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.836233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.836243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.836384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.836394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.836697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.836708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.837016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.837027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.837341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.837350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.837651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.837660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.837976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.837986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.838280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.838291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.838621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.838630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.838927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.838935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.839273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.839283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.839596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.839605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 00:38:32.580 [2024-12-07 11:50:31.839920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.580 [2024-12-07 11:50:31.839930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.580 qpair failed and we were unable to recover it. 
00:38:32.580 [2024-12-07 11:50:31.840343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.580 [2024-12-07 11:50:31.840353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.580 qpair failed and we were unable to recover it.
[... the three-record pattern above repeats with timestamps advancing from 11:50:31.840 to 11:50:31.873: every connect() to 10.0.0.2 port 4420 fails with errno = 111 and each reconnect attempt on tqpair=0x6150003aff00 is abandoned ...]
00:38:32.583 [2024-12-07 11:50:31.873390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.583 [2024-12-07 11:50:31.873399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.583 qpair failed and we were unable to recover it. 00:38:32.583 [2024-12-07 11:50:31.873606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.583 [2024-12-07 11:50:31.873616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.583 qpair failed and we were unable to recover it. 00:38:32.583 [2024-12-07 11:50:31.873890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.583 [2024-12-07 11:50:31.873899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.583 qpair failed and we were unable to recover it. 00:38:32.583 [2024-12-07 11:50:31.874191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.583 [2024-12-07 11:50:31.874200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.583 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.874500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.874509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.874802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.874812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.874997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.875007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.875304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.875315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.875601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.875610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.875927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.875937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.876253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.876263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.876546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.876556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.876882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.876892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.877203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.877213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.877493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.877503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.877789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.877798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.877998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.878007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.878205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.878214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.878525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.878534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.878831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.878841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.879213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.879222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.879502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.879516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.879817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.879827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.880135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.880145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.880462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.880472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.880776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.880785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.881070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.881080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.881406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.881419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.881726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.881735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.882061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.882070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.882247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.882257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.882574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.882584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.882908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.882919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.883229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.883239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.883425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.883435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 
00:38:32.584 [2024-12-07 11:50:31.883767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.883777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.884079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.884089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.584 qpair failed and we were unable to recover it. 00:38:32.584 [2024-12-07 11:50:31.884398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.584 [2024-12-07 11:50:31.884408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.884682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.884691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.884970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.884980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.885289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.885300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.885662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.885671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.885968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.885977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.886273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.886282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.886584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.886593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.886899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.886907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.887239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.887249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.887539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.887550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.887889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.887898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.888095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.888104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.888476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.888485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.888792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.888801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.889126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.889136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.889469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.889478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.889783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.889792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.890175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.890184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.890506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.890515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.890674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.890690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.891004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.891016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.891304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.891313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.891634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.891643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.891851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.891860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.892054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.892065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.892372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.892381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.892539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.892549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.892856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.892866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.893174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.893184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.893489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.893498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.893794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.893803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.894138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.894148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.894456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.894465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.894662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.894671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.895055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.895065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.895390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.895399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 00:38:32.585 [2024-12-07 11:50:31.895691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.585 [2024-12-07 11:50:31.895701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.585 qpair failed and we were unable to recover it. 
00:38:32.585 [2024-12-07 11:50:31.895991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.586 [2024-12-07 11:50:31.896001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.586 qpair failed and we were unable to recover it.
[previous messages repeated for each subsequent connection attempt through 2024-12-07 11:50:31.930168: connect() to 10.0.0.2 port 4420 kept failing with errno = 111 (ECONNREFUSED) and every qpair on tqpair=0x6150003aff00 was reported as failed and unrecoverable]
00:38:32.865 [2024-12-07 11:50:31.930486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.930495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.930803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.930812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.931094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.931103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.931386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.931401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.931740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.931756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 
00:38:32.865 [2024-12-07 11:50:31.932052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.932062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.932364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.932372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.932645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.932654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.932983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.932992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.933282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.933292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 
00:38:32.865 [2024-12-07 11:50:31.933604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.933613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.933907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.933917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.934230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.934240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.934412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.934424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.934737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.934746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 
00:38:32.865 [2024-12-07 11:50:31.935063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.935073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.935350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.935359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.865 [2024-12-07 11:50:31.935652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.865 [2024-12-07 11:50:31.935661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.865 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.935990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.936000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.936337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.936347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.936654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.936663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.936849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.936859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.937196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.937205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.937518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.937527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.937834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.938127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.938137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.938461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.938470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.938778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.938787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.938958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.938967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.939248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.939261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.939570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.939580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.939884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.939893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.940196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.940206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.940517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.940526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.940861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.940869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.941165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.941175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.941482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.941491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.941786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.941804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.942144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.942154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.942435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.942450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.942715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.942723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.943106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.943116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.943423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.943432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.943715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.943725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.944016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.944027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.944331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.944340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.944641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.944649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.944933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.944943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.945239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.945248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.945553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.945563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.945891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.945900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.946187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.946197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.946577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.946586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.946905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.946914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 00:38:32.866 [2024-12-07 11:50:31.947112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.866 [2024-12-07 11:50:31.947122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.866 qpair failed and we were unable to recover it. 
00:38:32.866 [2024-12-07 11:50:31.947470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.947479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.947762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.947772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.948075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.948085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.948424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.948433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.948644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.948653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 
00:38:32.867 [2024-12-07 11:50:31.948960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.948969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.949353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.949363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.949675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.949684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.949990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.950324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.950334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 
00:38:32.867 [2024-12-07 11:50:31.950636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.950645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.950953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.950961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.951232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.951241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.951412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.951421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.951751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.951760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 
00:38:32.867 [2024-12-07 11:50:31.951947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.951957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.952151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.952161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.952429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.952438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.952617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.952627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 00:38:32.867 [2024-12-07 11:50:31.952991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.867 [2024-12-07 11:50:31.953000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.867 qpair failed and we were unable to recover it. 
00:38:32.867 [2024-12-07 11:50:31.953306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.867 [2024-12-07 11:50:31.953316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.867 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence for tqpair=0x6150003aff00 (addr=10.0.0.2, port=4420) repeats from 11:50:31.953 through 11:50:31.987; repeated occurrences elided ...]
00:38:32.870 [2024-12-07 11:50:31.987247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.870 [2024-12-07 11:50:31.987257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.870 qpair failed and we were unable to recover it.
00:38:32.870 [2024-12-07 11:50:31.987562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.987572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.987878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.987888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.988190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.988199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.988504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.988514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.988821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.988831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 
00:38:32.870 [2024-12-07 11:50:31.989137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.989147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.989460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.989469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.989752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.989762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.990067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.990076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.990360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.990377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 
00:38:32.870 [2024-12-07 11:50:31.990552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.870 [2024-12-07 11:50:31.990562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.870 qpair failed and we were unable to recover it. 00:38:32.870 [2024-12-07 11:50:31.990854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.990863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.991162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.991172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.991487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.991496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.991801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.991813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.992122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.992131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.992435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.992445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.992751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.992760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.993067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.993077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.993392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.993401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.993686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.994013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.994023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.994299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.994308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.994604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.994613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.994921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.994930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.995235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.995245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.995527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.995537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.995818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.995827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.996133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.996144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.996459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.996469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.996769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.996778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.997072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.997086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.997393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.997401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.997592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.997601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.997929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.997938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.998234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.998243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.998561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.998570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.998857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.998866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.999170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.999180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:31.999460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.999469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 
00:38:32.871 [2024-12-07 11:50:31.999724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:31.999734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:32.000045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:32.000054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:32.000415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:32.000424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.871 [2024-12-07 11:50:32.000717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.871 [2024-12-07 11:50:32.000726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.871 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.001019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.001029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.001264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.001274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.001596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.001605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.001913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.001922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.002276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.002286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.002588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.002598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.002894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.002903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.003179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.003190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.003506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.003516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.003799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.003809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.004137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.004149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.004449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.004459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.004765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.004775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.005079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.005088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.005394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.005403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.005473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.005482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.005765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.005774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.006075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.006084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.006369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.006378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.006556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.006566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.006874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.006883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.007190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.007200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.007515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.007523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.007823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.007832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.008029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.008040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.008355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.008364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.008669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.008679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.008982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.008991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.009283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.009293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.009509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.009518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 00:38:32.872 [2024-12-07 11:50:32.009791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.872 [2024-12-07 11:50:32.009800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.872 qpair failed and we were unable to recover it. 
00:38:32.872 [2024-12-07 11:50:32.010157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.872 [2024-12-07 11:50:32.010167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.872 qpair failed and we were unable to recover it.
00:38:32.872 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:50:32.010465 through 11:50:32.044817 ...]
00:38:32.876 [2024-12-07 11:50:32.045131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.045140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.045444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.045453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.045566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.045576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.045889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.045899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.046233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.046243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 
00:38:32.876 [2024-12-07 11:50:32.046547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.046557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.046851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.046861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.047169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.047179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.047499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.047517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.047811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.047821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 
00:38:32.876 [2024-12-07 11:50:32.048120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.048131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.048443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.048454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.048759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.048768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.049082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.049092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.049409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.049418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 
00:38:32.876 [2024-12-07 11:50:32.049595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.049604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.049965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.049975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.050286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.050296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.050602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.050611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.050917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.050927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 
00:38:32.876 [2024-12-07 11:50:32.051227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.051238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.051545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.051555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.051842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.051852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.052036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.052046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.052217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.052227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 
00:38:32.876 [2024-12-07 11:50:32.052527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.876 [2024-12-07 11:50:32.052536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.876 qpair failed and we were unable to recover it. 00:38:32.876 [2024-12-07 11:50:32.052829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.052838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.053155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.053164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.053450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.053459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.053638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.053649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.053954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.053963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.054373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.054383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.054690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.054700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.055040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.055054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.055377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.055386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.055694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.055703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.056012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.056022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.056351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.056360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.056652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.056661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.056969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.056978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.057286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.057295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.057605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.057614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.057914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.057923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.058239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.058250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.058555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.058564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.058846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.058856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.059054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.059064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.059360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.059369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.059655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.059671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.059980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.059989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.060350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.060359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.060659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.060670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.060997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.061006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.061355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.061364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.061658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.061667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 
00:38:32.877 [2024-12-07 11:50:32.061975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.061985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.062339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.062349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.877 qpair failed and we were unable to recover it. 00:38:32.877 [2024-12-07 11:50:32.062513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.877 [2024-12-07 11:50:32.062523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.062831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.062842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.063191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.063201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 
00:38:32.878 [2024-12-07 11:50:32.063361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.063371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.063637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.063646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.063929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.063938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.064235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.064245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.064548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.064557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 
00:38:32.878 [2024-12-07 11:50:32.064946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.064956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.065265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.065275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.065583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.065592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.065887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.065896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.066201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.066211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 
00:38:32.878 [2024-12-07 11:50:32.066589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.066598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.066899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.066908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.067123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.067133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.067236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.067246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 00:38:32.878 [2024-12-07 11:50:32.067578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.878 [2024-12-07 11:50:32.067587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.878 qpair failed and we were unable to recover it. 
00:38:32.878 [2024-12-07 11:50:32.067921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.878 [2024-12-07 11:50:32.067930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.878 qpair failed and we were unable to recover it.
00:38:32.881 [2024-12-07 11:50:32.102725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.102735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 00:38:32.881 [2024-12-07 11:50:32.103045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.103054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 00:38:32.881 [2024-12-07 11:50:32.103418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.103429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 00:38:32.881 [2024-12-07 11:50:32.103733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.103742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 00:38:32.881 [2024-12-07 11:50:32.104022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.104032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 
00:38:32.881 [2024-12-07 11:50:32.104327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.881 [2024-12-07 11:50:32.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.881 qpair failed and we were unable to recover it. 00:38:32.881 [2024-12-07 11:50:32.104662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.104671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.104912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.104921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.105113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.105123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.105330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.105340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.105664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.105674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.105866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.105875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.106114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.106124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.106331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.106340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.106627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.106636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.106960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.106969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.107260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.107270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.107580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.107591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.107900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.107910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.108247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.108257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.108460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.108470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.108703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.108889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.108899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.109215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.109226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.109509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.109519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.109809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.109819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.109862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.109872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.110173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.110184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.110503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.110513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.110705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.110715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.111004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.111018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.111294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.111304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.111596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.111605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.111918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.111927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.112119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.112130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.112483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.112819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.112828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.113144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.113154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.113319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.113328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.113638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.113647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 
00:38:32.882 [2024-12-07 11:50:32.113958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.113967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.114274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.114283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.114569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.114578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.882 [2024-12-07 11:50:32.114914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.882 [2024-12-07 11:50:32.114923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.882 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.115273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.115283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.115472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.115481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.115803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.115812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.116127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.116136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.116481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.116490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.116839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.116848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.117140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.117150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.117454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.117463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.117764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.117780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.117848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.117857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.118126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.118136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.118466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.118475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.118676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.118685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.118996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.119005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.119283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.119293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.119589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.119598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.119911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.119920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.120182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.120193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.120515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.120524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.120816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.120826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.121134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.121143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.121437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.121452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.121749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.121758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.122051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.122060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.122378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.122387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.122692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.122701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.123016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.123026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.123331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.123340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.123651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.123660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.123984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.123993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.124328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.124338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.883 [2024-12-07 11:50:32.124519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.124528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.124830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.124839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.125052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.125062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.125393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.125402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 00:38:32.883 [2024-12-07 11:50:32.125562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.883 [2024-12-07 11:50:32.125572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.883 qpair failed and we were unable to recover it. 
00:38:32.886 [2024-12-07 11:50:32.158749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.158759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.159046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.159056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.159364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.159373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.159677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.159686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.159997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.160006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.160314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.160324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.160550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.160559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.160848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.160857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.161160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.161169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.161460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.161469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.161778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.161787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.162093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.162102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.162419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.162428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.162722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.162731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.163045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.163055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.163442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.163451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.164046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.164056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.164385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.164394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.164663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.164685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.165002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.165018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.165193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.165203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.165513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.165522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.165843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.165852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.166163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.166173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.166537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.166546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.166871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.166880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.167185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.167194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.167524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.167534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.167858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.167868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.168222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.168232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.168277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.168289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.168471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.168481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.168798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.168807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.169112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.169122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 
00:38:32.887 [2024-12-07 11:50:32.169443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.169452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.169777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.169788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.887 qpair failed and we were unable to recover it. 00:38:32.887 [2024-12-07 11:50:32.170169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.887 [2024-12-07 11:50:32.170182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.170356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.170366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.170696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.170705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.171000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.171013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.171358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.171367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.171668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.171678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.171982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.171991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.172264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.172273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.172586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.172595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.172879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.172889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.173189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.173199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.173493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.173503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.173813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.173822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.174116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.174126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.174448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.174457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.174767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.175073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.175082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.175370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.175379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.175562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.175572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.175856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.175866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.176177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.176187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.176473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.176484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.176791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.176800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.176988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.176997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.177189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.177199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.177506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.177515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.177822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.177831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.178154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.178164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.178456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.178465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.178754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.178762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.179092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.179102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.179417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.179426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.179741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.179750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.179955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.179964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.180168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.180180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.180549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.180558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.180876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.180885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 00:38:32.888 [2024-12-07 11:50:32.181161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.888 [2024-12-07 11:50:32.181170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:32.888 qpair failed and we were unable to recover it. 
00:38:32.888 [2024-12-07 11:50:32.181355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.889 [2024-12-07 11:50:32.181363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:32.889 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim for every reconnect attempt between 11:50:32.181 and 11:50:32.215 ...]
00:38:33.167 [2024-12-07 11:50:32.215074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.167 [2024-12-07 11:50:32.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.167 qpair failed and we were unable to recover it.
00:38:33.167 [2024-12-07 11:50:32.215401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.215410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.215720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.215729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.216055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.216064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.216349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.216358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.216543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.216553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 
00:38:33.167 [2024-12-07 11:50:32.216834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.216843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.217171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.217181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.217490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.217499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.217884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.217893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.218186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.218196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 
00:38:33.167 [2024-12-07 11:50:32.218521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.218530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.218812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.218827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.219032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.219043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.219256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.219266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.219553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 
00:38:33.167 [2024-12-07 11:50:32.219773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.219783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.219943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.219952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.220248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.220258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.220529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.220538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.220952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.220961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 
00:38:33.167 [2024-12-07 11:50:32.221274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.221284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.221608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.221617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.167 [2024-12-07 11:50:32.221926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.167 [2024-12-07 11:50:32.221935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.167 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.222250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.222259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.222549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.222558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.222864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.222873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.223186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.223195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.223511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.223520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.223703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.224014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.224023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.224330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.224340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.224664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.224674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.225003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.225016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.225288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.225298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.225617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.225627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.225933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.225943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.226261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.226270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.226586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.226595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.226905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.226914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.227150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.227174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.227469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.227478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.227789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.227799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.227998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.228007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.228300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.228310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.228471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.228481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.228841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.228850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.229158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.229168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.229433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.229442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.229772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.229781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.229978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.229987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.230291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.230300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.230608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.230617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.230907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.230916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.231243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.231252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.231568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.231577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.231884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.231893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.232183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.232192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.232514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.232523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.232697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.232706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 00:38:33.168 [2024-12-07 11:50:32.232899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.168 [2024-12-07 11:50:32.232908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.168 qpair failed and we were unable to recover it. 
00:38:33.168 [2024-12-07 11:50:32.233106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.233115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.233294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.233303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.233628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.233638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.233979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.233989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.234294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.234304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 
00:38:33.169 [2024-12-07 11:50:32.234587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.234597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.234787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.234797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.234961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.234972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.235211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.235223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.235514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.235524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 
00:38:33.169 [2024-12-07 11:50:32.235833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.235843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.236156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.236165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.236467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.236476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.236772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.236781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 00:38:33.169 [2024-12-07 11:50:32.237065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.169 [2024-12-07 11:50:32.237074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.169 qpair failed and we were unable to recover it. 
00:38:33.169 [2024-12-07 11:50:32.237386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.169 [2024-12-07 11:50:32.237395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.169 qpair failed and we were unable to recover it.
00:38:33.172 [... the same connect()/qpair-recovery failure pair repeats for every retry from 11:50:32.237 through 11:50:32.271 ...]
00:38:33.172 [2024-12-07 11:50:32.271637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.271646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.271953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.271963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.272265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.272275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.272578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.272587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.272873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.272882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 
00:38:33.172 [2024-12-07 11:50:32.273099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.273108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.273276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.273286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.273574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.273583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.273919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.273929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.274205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.274215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 
00:38:33.172 [2024-12-07 11:50:32.274550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.274559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.274863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.274872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.275168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.275178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.275487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.275496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.172 [2024-12-07 11:50:32.275765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.275774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 
00:38:33.172 [2024-12-07 11:50:32.275964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.172 [2024-12-07 11:50:32.275974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.172 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.276267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.276276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.276581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.276590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.276881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.276891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.277183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.277192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.277488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.277498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.277648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.277657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.277968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.277977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.278292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.278301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.278584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.278598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.278903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.278912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.279118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.279128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.279459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.279468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.279653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.279663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.279903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.279912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.280185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.280195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.280505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.280514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.280889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.280898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.281209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.281218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.281502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.281512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.281819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.281828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.282116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.282514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.282523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.282828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.282839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.283149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.283159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.283473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.283482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.283789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.283803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.283991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.284002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.284176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.284186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.284401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.284410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.284774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.284784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.284991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.285001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.285397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.285406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.285699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.285708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.286002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.286014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 
00:38:33.173 [2024-12-07 11:50:32.286280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.286289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.286607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.286935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.286944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.173 [2024-12-07 11:50:32.287216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.173 [2024-12-07 11:50:32.287226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.173 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.287539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.287548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.287856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.287865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.288166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.288175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.288469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.288483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.288785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.288794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.289103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.289112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.289424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.289433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.289789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.289798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.290095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.290105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.290435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.290444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.290735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.290744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.291050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.291060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.291396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.291405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.291719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.291728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.292032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.292041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.292354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.292363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.292666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.292675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.292981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.292990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.293158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.293168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.293493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.293502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.293811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.293820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.294037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.294048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.294355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.294363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.294531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.294541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.294771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.294783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.295069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.295079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.295369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.295378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.295689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.295697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.295846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.295856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.296123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.296132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.296461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.296470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.296761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.296771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.297085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.297094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.297402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.297411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.297755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.297765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.298068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.298078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 
00:38:33.174 [2024-12-07 11:50:32.298385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.298394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.174 qpair failed and we were unable to recover it. 00:38:33.174 [2024-12-07 11:50:32.298701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.174 [2024-12-07 11:50:32.298711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.299022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.299033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.299360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.299369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.299707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.299715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.300021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.300030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.300313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.300322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.300651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.300660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.300964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.300973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.301281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.301290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.301595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.301604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.301762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.301772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.302036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.302046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.302362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.302371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.302680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.302689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.302993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.303007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.303339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.303349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.303657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.303666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.304042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.304051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.304339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.304348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.304651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.304660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.304968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.304977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.305143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.305153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.305464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.305473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.305673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.305682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.306006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.306296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.306306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.306602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.306611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.306924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.306935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.307263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.307272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.307577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.307586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.307881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.307890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.308203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.308213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.308530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.308539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 00:38:33.175 [2024-12-07 11:50:32.308816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.308826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.175 qpair failed and we were unable to recover it. 
00:38:33.175 [2024-12-07 11:50:32.309194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.175 [2024-12-07 11:50:32.309204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.309414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.309423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.309737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.309754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.310055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.310064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.310257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.310267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.310594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.310603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.310918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.310927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.311232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.311242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.311416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.311426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.311697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.311707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.312034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.312044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.312368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.312377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.312667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.312676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.312974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.312984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.313295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.313305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.313622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.313631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.313926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.313935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.314262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.314271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.314581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.314590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.314893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.314902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.315118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.315129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.315458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.315467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.315770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.315779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.316063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.316073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.316278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.316287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.316605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.316615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.316924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.316934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.317280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.317291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.317580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.317589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.317697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.317706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.318018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.318029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.318308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.318317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.318615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.318624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.318950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.318961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.319133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.319144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 
00:38:33.176 [2024-12-07 11:50:32.319474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.319483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.319766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.319775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.320091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.176 [2024-12-07 11:50:32.320101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.176 qpair failed and we were unable to recover it. 00:38:33.176 [2024-12-07 11:50:32.320267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.320276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.320597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.320606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 
00:38:33.177 [2024-12-07 11:50:32.320909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.320918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.321230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.321240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.321547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.321557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.321916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.321925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.322299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.322312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 
00:38:33.177 [2024-12-07 11:50:32.322593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.322602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.322921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.322931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.323253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.323263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.323566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.323575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.323789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.323798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 
00:38:33.177 [2024-12-07 11:50:32.324085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.324095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.324423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.324432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.324721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.324729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.324928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.324937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 00:38:33.177 [2024-12-07 11:50:32.325248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.177 [2024-12-07 11:50:32.325258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.177 qpair failed and we were unable to recover it. 
00:38:33.177 [2024-12-07 11:50:32.325663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.325671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.325982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.325991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.326286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.326295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.326602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.326612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.326923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.326932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.327111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.327121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.327485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.327495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.327800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.327810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.328127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.328138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.328331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.328342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.328662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.328671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.328976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.328985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.329303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.329312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.329601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.329611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.329817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.329827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.330030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.330040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.330306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.330315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.330616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.330626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.330935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.330946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.331239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.331255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.331435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.177 [2024-12-07 11:50:32.331445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.177 qpair failed and we were unable to recover it.
00:38:33.177 [2024-12-07 11:50:32.331752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.331761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.332051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.332061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.332425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.332435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.332742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.332751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.333042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.333051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.333381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.333391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.333696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.333705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.334075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.334084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.334280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.334289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.334603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.334612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.334933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.334942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.335261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.335271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.335554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.335564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.335885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.336188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.336399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.336409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.336713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.336723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.337030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.337040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.337351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.337360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.337669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.337679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.337860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.337870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.338066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.338076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.338339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.338349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.338545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.338554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.338868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.338878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.339193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.339203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.339525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.339535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.339722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.339732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.340044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.340054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.340347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.340356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.340675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.340684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.341072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.341082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.341375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.341388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.341695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.341704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.342014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.342024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.342346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.342531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.342541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.342758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.178 [2024-12-07 11:50:32.342771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.178 qpair failed and we were unable to recover it.
00:38:33.178 [2024-12-07 11:50:32.343179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.343189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.343470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.343479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.343653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.343663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.343929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.343938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.344268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.344278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.344580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.344590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.344895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.344905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.345176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.345186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.345424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.345435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.345767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.345777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.346116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.346433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.346442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.346753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.346763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.347066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.347076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.347360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.347371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.347677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.347687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.347873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.347882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.348174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.348183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.348477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.348486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.348804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.348813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.349118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.349134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.349449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.349459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.349771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.349780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.350111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.350120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.350431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.350440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.350755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.350764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.350928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.350939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.351293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.351302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.351617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.351626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.351940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.351950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.352129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.352140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.352365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.352375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.352702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.352711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.179 [2024-12-07 11:50:32.352908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.179 [2024-12-07 11:50:32.352918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.179 qpair failed and we were unable to recover it.
00:38:33.180 [2024-12-07 11:50:32.353250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.180 [2024-12-07 11:50:32.353261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.180 qpair failed and we were unable to recover it.
00:38:33.180 [2024-12-07 11:50:32.353574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.180 [2024-12-07 11:50:32.353583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.180 qpair failed and we were unable to recover it.
00:38:33.180 [2024-12-07 11:50:32.353891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.353900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.354221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.354231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.354522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.354532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.354844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.354853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.355246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.355256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.355456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.355465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.355759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.355768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.356069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.356079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.356401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.356410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.356697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.356706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.357034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.357044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.357275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.357284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.357608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.357618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.357932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.357942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.358115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.358126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.358331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.358340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.358653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.358662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.358978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.358988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.359207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.359217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.359496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.359505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.359674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.359684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.360051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.360061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.360386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.360398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.360566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.360577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.360868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.360878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.361166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.361176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.361494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.361503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.361761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.361770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.362093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.362103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.362386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.362395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.362724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.362736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.363047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.363058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.363366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.363376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.363539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.363549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 00:38:33.180 [2024-12-07 11:50:32.363830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.363840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.180 qpair failed and we were unable to recover it. 
00:38:33.180 [2024-12-07 11:50:32.364153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.180 [2024-12-07 11:50:32.364163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.364465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.364474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.364775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.364784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.365074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.365083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.365286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.365295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.365406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.365414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.365702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.365711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.366021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.366031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.366349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.366362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.366667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.366676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.366959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.366968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.367290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.367299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.367484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.367493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.367806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.367815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.368050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.368060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.368400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.368409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.368724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.368734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.369062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.369072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.369377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.369387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.369695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.369704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.369925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.369934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.370231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.370241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.370556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.370565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.370876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.370885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.371223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.371232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.371593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.371602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.371910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.371919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.372082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.372093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.372448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.372457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.372793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.372802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.372987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.372996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.373298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.373307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.373637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.373646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.373951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.373961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.374294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.374304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 
00:38:33.181 [2024-12-07 11:50:32.374588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.374599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.374883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.374892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.375080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.181 [2024-12-07 11:50:32.375089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.181 qpair failed and we were unable to recover it. 00:38:33.181 [2024-12-07 11:50:32.375456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.375465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.375867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.375877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 
00:38:33.182 [2024-12-07 11:50:32.376060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.376070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.376424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.376434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.376745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.376754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.377065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.377075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.377397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.377406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 
00:38:33.182 [2024-12-07 11:50:32.377776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.377786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.377967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.377978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.378281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.378290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.378573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.378582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.378895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.378905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 
00:38:33.182 [2024-12-07 11:50:32.379139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.379150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.379471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.379485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.379670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.379680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.379873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.379883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 00:38:33.182 [2024-12-07 11:50:32.380123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.182 [2024-12-07 11:50:32.380133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.182 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / qpair-recovery error records repeated from 11:50:32.380533 through 11:50:32.413447, elided ...]
00:38:33.185 [2024-12-07 11:50:32.413746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.413755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.414073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.414082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.414385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.414394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.414702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.414712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.415067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.415077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 
00:38:33.185 [2024-12-07 11:50:32.415449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.415459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.415752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.415761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.416076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.416086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.416300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.416310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.416585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.416594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 
00:38:33.185 [2024-12-07 11:50:32.416894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.416904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.417090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.417101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.417387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.417396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.417704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.417714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.418016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.418030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 
00:38:33.185 [2024-12-07 11:50:32.418341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.418350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.418509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.418519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.418723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.418733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.418913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.418923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.185 [2024-12-07 11:50:32.419224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.419233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 
00:38:33.185 [2024-12-07 11:50:32.419526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.185 [2024-12-07 11:50:32.419544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.185 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.419843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.419852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.420058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.420068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.420337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.420347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.420655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.420666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.420978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.420988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.421314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.421324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.421528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.421538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.421870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.421880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.422186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.422195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.422478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.422489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.422757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.422766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.423061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.423071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.423376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.423385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.423678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.423688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.424002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.424021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.424323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.424332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.424616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.424625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.424813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.424823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.425155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.425164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.425478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.425489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.425803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.425812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.426127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.426137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.426521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.426530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.426839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.426848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.427179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.427189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.427508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.427517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.427720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.427729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.428063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.428073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.428395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.428404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.428757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.428766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.429049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.429059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.429281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.429290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.429598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.429607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.429895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 
00:38:33.186 [2024-12-07 11:50:32.430285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.430295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.430610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.430619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.430904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.430915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.186 [2024-12-07 11:50:32.431225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.186 [2024-12-07 11:50:32.431235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.186 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.431525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.431539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.431837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.431846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.432160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.432170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.432487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.432496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.432804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.432813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.433130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.433140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.433432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.433442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.433723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.433732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.434016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.434026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.434236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.434572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.434581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.434748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.434763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.435047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.435057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.435378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.435387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.435697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.435707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.436030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.436039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.436268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.436279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.436576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.436585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.436893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.436903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.437183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.437196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.437367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.437377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.437651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.437660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.437967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.437976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.438269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.438280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.438587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.438596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.438893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.438902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.439224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.439234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.439544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.439553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.439879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.439888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.440175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.440185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.440503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.440513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.440817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.440827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.441141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.441151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.441454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.441465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.441770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.441779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.442063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.442073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 
00:38:33.187 [2024-12-07 11:50:32.442363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.442372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.187 qpair failed and we were unable to recover it. 00:38:33.187 [2024-12-07 11:50:32.442681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.187 [2024-12-07 11:50:32.442691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.443000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.443014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.443343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.443354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.443652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.443662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.443970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.443981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.444265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.444275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.444458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.444468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.444821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.444831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.445022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.445033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.445339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.445349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.445662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.445672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.445955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.445964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.446273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.446283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.446591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.446599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.446888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.446900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.447201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.447211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.447504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.447514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.447830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.447840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.448128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.448138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.448434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.448444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.448758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.448767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.449077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.449087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.449393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.449403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.449683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.449694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.449995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.450004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.450301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.450310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.450600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.450609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.450912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.450921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.451237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.451246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.451566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.451901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.451911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.452204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.452214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.452537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.452546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.452857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.452866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 
00:38:33.188 [2024-12-07 11:50:32.453201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.188 [2024-12-07 11:50:32.453212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.188 qpair failed and we were unable to recover it. 00:38:33.188 [2024-12-07 11:50:32.453541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.453551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.453778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.453788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.454122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.454131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.454436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.454445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.454754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.454764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.455072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.455082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.455405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.455414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.455577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.455588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.455946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.455956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.456262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.456272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.456581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.456591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.456917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.456930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.457222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.457232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.457544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.457553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.457862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.457872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.458166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.458176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.458375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.458384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.458701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.458709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.459103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.459114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.459385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.459396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.459770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.459779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.459989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.459998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.460344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.460354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.460655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.460665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.460968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.460978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.461285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.461295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.461455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.461465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.461727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.461737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.461909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.461921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.462273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.462283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.462482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.462491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.462820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.462830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.463147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.463443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.463452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 
00:38:33.189 [2024-12-07 11:50:32.463768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.463777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.464071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.464081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.464399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.464408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.464690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.189 [2024-12-07 11:50:32.464706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.189 qpair failed and we were unable to recover it. 00:38:33.189 [2024-12-07 11:50:32.465008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.190 [2024-12-07 11:50:32.465027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.190 qpair failed and we were unable to recover it. 
00:38:33.190 [2024-12-07 11:50:32.465400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.465409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.465716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.465726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.466035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.466045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.466355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.466364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.466664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.466674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.466986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.466995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.467284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.467295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.467598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.467608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.467891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.467900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.468098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.468109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.468429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.468439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.468742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.468752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.468921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.468931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.469221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.469231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.469555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.469564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.469767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.469779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.469961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.469971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.470245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.470255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.470584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.470594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.470954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.470964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.471292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.471304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.471601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.471610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.471920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.471930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.472121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.472131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.472461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.472470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.472752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.472762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.472957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.472966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.473251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.473260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.473559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.473569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.473870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.473879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.474159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.474169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.474475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.474485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.474814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.474824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.475133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.475143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.475457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.475467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.475800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.475810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.190 qpair failed and we were unable to recover it.
00:38:33.190 [2024-12-07 11:50:32.476083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.190 [2024-12-07 11:50:32.476097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.476405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.476416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.476732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.476742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.477047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.477057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.477349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.477359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.477545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.477555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.477884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.477894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.478119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.478129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.478458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.478468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.478751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.478767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.479150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.479161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.479471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.479481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.479667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.479677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.479988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.479997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.480359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.480370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.480678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.480687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.480855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.480865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.481133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.481143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.481472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.481483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.481872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.481881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.482209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.482220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.482526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.482535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.482683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.482693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.483047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.483057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.483242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.483255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.483554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.483563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.483852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.483861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.484242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.484252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.484542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.484551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.484720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.484729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.485009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.485028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.485375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.485385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.485699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.485708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.486018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.486028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.486337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.486346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.486653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.486662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.486949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.486959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.487247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.487256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.191 qpair failed and we were unable to recover it.
00:38:33.191 [2024-12-07 11:50:32.487571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.191 [2024-12-07 11:50:32.487581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.487921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.487930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.488223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.488232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.488537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.488546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.488852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.488862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.489167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.489177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.489494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.489504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.489813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.489823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.490128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.490138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.490456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.490466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.490802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.490811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.491111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.491121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.491453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.491463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.491775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.491785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.492074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.492085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.492443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.492452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.492767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.492776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.493075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.493085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.493408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.493417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.493728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.493738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.494056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.494066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.494376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.494386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.494544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.494555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.494848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.494857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.495162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.495172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.495375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.495384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.495588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.495602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.495948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.495957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.496130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.496140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.496515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.496524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.496831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.496841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.497162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.497171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.497481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.497493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.497804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.497814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.498107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.498117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.498454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.498464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.498776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.498785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.499079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.192 [2024-12-07 11:50:32.499089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.192 qpair failed and we were unable to recover it.
00:38:33.192 [2024-12-07 11:50:32.499380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.193 [2024-12-07 11:50:32.499389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.193 qpair failed and we were unable to recover it.
00:38:33.193 [2024-12-07 11:50:32.499679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.193 [2024-12-07 11:50:32.499689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.193 qpair failed and we were unable to recover it.
00:38:33.193 [2024-12-07 11:50:32.499998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.193 [2024-12-07 11:50:32.500008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.193 qpair failed and we were unable to recover it.
00:38:33.193 [2024-12-07 11:50:32.500253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.500262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.500595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.500606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.500944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.500954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.501249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.501259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.501605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.501615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-12-07 11:50:32.501798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.501809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.502101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.502111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-12-07 11:50:32.502430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.193 [2024-12-07 11:50:32.502439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.502758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.502769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.503074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.503084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 
00:38:33.468 [2024-12-07 11:50:32.503401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.503410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.503687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.503696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.504017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.504027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.504336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.504345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.504653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.504664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 
00:38:33.468 [2024-12-07 11:50:32.504970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.504979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.505156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.505166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.505537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.505547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.505852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.505863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.506201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.506212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 
00:38:33.468 [2024-12-07 11:50:32.506495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.506504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.506795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.506805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.507114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.507124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.507413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.507423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.507732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.507741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 
00:38:33.468 [2024-12-07 11:50:32.508116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.508126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.508332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.508341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.508629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.508641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.508847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.508857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.509187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.509197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 
00:38:33.468 [2024-12-07 11:50:32.509481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.509496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.509788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.509797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.510133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.468 [2024-12-07 11:50:32.510143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.468 qpair failed and we were unable to recover it. 00:38:33.468 [2024-12-07 11:50:32.510451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.510461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.510746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.510756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.511068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.511078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.511381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.511391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.511719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.511728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.512013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.512023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.512348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.512357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.512647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.512657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.512940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.512950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.513245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.513256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.513558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.513567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.513868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.513877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.514199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.514209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.514497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.514506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.514796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.514806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.515087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.515097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.515459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.515472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.515766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.515776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.515959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.515978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.516203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.516214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.516530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.516540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.516857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.516866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.517180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.517191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.517489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.517498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.517799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.517809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.518114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.518124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.518448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.518458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.518765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.518775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.519029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.519039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.519387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.519396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.519711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.519720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.520015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.520024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.520342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.520351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.520658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.520667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.520967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.520977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.521296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.469 [2024-12-07 11:50:32.521579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.521589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 
00:38:33.469 [2024-12-07 11:50:32.521892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.469 [2024-12-07 11:50:32.521902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.469 qpair failed and we were unable to recover it. 00:38:33.470 [2024-12-07 11:50:32.522121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.470 [2024-12-07 11:50:32.522131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.470 qpair failed and we were unable to recover it. 00:38:33.470 [2024-12-07 11:50:32.522457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.470 [2024-12-07 11:50:32.522467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.470 qpair failed and we were unable to recover it. 00:38:33.470 [2024-12-07 11:50:32.522768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.470 [2024-12-07 11:50:32.522778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.470 qpair failed and we were unable to recover it. 00:38:33.470 [2024-12-07 11:50:32.523084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.470 [2024-12-07 11:50:32.523093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.470 qpair failed and we were unable to recover it. 
00:38:33.470 [2024-12-07 11:50:32.523406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.470 [2024-12-07 11:50:32.523415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.470 qpair failed and we were unable to recover it.
00:38:33.473 [... the same connect() failed / qpair failed error pair repeats for every subsequent retry against tqpair=0x6150003aff00 (addr=10.0.0.2, port=4420) from 11:50:32.523 through 11:50:32.558 ...]
00:38:33.473 [2024-12-07 11:50:32.558667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.558677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.558983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.558994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.559308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.559320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.559672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.559682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.559988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.559998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 
00:38:33.473 [2024-12-07 11:50:32.560290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.560300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.560605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.560614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.560923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.560933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.561271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.561281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.561701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.561710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 
00:38:33.473 [2024-12-07 11:50:32.561919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.561929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.562315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.562324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.562408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.562714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.562724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.562917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.562929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 
00:38:33.473 [2024-12-07 11:50:32.563227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.563238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.563550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.563560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.563864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.563874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.564086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.564097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.564318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.564328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 
00:38:33.473 [2024-12-07 11:50:32.564549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.564560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.564923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.564933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.565097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.565111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.565407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.565418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.565732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.565742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 
00:38:33.473 [2024-12-07 11:50:32.565918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.565929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.473 [2024-12-07 11:50:32.566283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.473 [2024-12-07 11:50:32.566293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.473 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.566603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.566613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.566789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.566799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.567077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.567088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.567462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.567472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.567860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.567870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.568079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.568090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.568429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.568440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.568748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.568758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.568972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.568982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.569345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.569356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.569665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.569675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.570020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.570031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.570312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.570322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.570627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.570638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.570966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.570976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.571212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.571222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.571533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.571543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.571931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.571941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.572259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.572270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.572580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.572590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.572752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.572763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.572940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.572950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.573155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.573165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.573509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.573520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.573816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.573831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.574057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.574074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.574369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.574380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.574574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.574584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.574890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.574900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.575073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.575084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.575413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.575423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.575733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.575743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.576061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.576072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 
00:38:33.474 [2024-12-07 11:50:32.576480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.576490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.576833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.576843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.577047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.577060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.577271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.474 qpair failed and we were unable to recover it. 00:38:33.474 [2024-12-07 11:50:32.577602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.474 [2024-12-07 11:50:32.577612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 
00:38:33.475 [2024-12-07 11:50:32.577803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.577814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.578062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.578073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.578343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.578352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.578721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.578731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.579089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.579100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 
00:38:33.475 [2024-12-07 11:50:32.579294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.579304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.579603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.579614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.579925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.579935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.580307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.580317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 00:38:33.475 [2024-12-07 11:50:32.580628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.475 [2024-12-07 11:50:32.580638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.475 qpair failed and we were unable to recover it. 
00:38:33.475 [2024-12-07 11:50:32.580825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.475 [2024-12-07 11:50:32.580836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.475 qpair failed and we were unable to recover it.
[log condensed: the identical three-line error sequence above (connect() failed, errno = 111 / sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 11:50:32.580825 through 11:50:32.613545 with no other output]
00:38:33.478 [2024-12-07 11:50:32.613753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.613763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.613940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.613950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.614230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.614240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.614624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.614633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.614930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.614940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 
00:38:33.478 [2024-12-07 11:50:32.615121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.615130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.615464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.615474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.615540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.615549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.615818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.615828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.616017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.616028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 
00:38:33.478 [2024-12-07 11:50:32.616302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.616311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.616635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.616644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.616955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.616965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.617253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.617263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.617548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.617557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 
00:38:33.478 [2024-12-07 11:50:32.617841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.617851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.618017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.618027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.618301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.618311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.618630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.618640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.618950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.618960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 
00:38:33.478 [2024-12-07 11:50:32.619304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.619313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.478 [2024-12-07 11:50:32.619605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.478 [2024-12-07 11:50:32.619614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.478 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.619938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.619947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.620123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.620133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.620466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.620476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.620796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.620806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.621059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.621069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.621399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.621408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.621581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.621591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.621945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.621954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.622119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.622129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.622462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.622472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.622704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.622717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.623047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.623057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.623349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.623359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.623548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.623558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.623873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.623882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.624096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.624107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.624400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.624409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.624694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.624710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.625019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.625029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.625335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.625344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.625721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.625731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.626027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.626037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.626101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.626110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.626400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.626409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.626806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.626816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.627129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.627139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.627435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.627444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.627610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.627619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.627834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.627843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.628155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.628165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.628377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.628387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.628728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.628740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.629001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.629016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 
00:38:33.479 [2024-12-07 11:50:32.629329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.629339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.629670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.629680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.630070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.630080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.630298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.630307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.479 qpair failed and we were unable to recover it. 00:38:33.479 [2024-12-07 11:50:32.630462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.479 [2024-12-07 11:50:32.630472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 
00:38:33.480 [2024-12-07 11:50:32.630763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.630773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.631078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.631087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.631412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.631421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.631836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.631846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.632127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.632137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 
00:38:33.480 [2024-12-07 11:50:32.632486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.632495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.632799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.632809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.633004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.633016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.633370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.633380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.633695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.633704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 
00:38:33.480 [2024-12-07 11:50:32.634091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.634102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.634276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.634285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.634607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.634618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.634814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.634824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.635154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.635164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 
00:38:33.480 [2024-12-07 11:50:32.635485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.635494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.635777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.635788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.636156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.636166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.636511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.636521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 00:38:33.480 [2024-12-07 11:50:32.636823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.480 [2024-12-07 11:50:32.636832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.480 qpair failed and we were unable to recover it. 
00:38:33.483 [2024-12-07 11:50:32.669648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.669658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.669985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.669994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.670314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.670324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.670635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.670645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.670822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.670831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 
00:38:33.483 [2024-12-07 11:50:32.671197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.671206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.671527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.671537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.671845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.671854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.672161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.672171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.672482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.672492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 
00:38:33.483 [2024-12-07 11:50:32.672701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.672711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.672994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.673003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.673359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.673368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.673713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.673722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.674022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.674032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 
00:38:33.483 [2024-12-07 11:50:32.674335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.483 [2024-12-07 11:50:32.674345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.483 qpair failed and we were unable to recover it. 00:38:33.483 [2024-12-07 11:50:32.674654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.674663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.674970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.674980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.675266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.675285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.675592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.675602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.675902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.675911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.676256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.676266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.676449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.676460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.676759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.676768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.676927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.676936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.677311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.677321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.677519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.677528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.677873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.677882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.678260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.678270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.678565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.678576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.678949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.678958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.679256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.679266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.679581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.679590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.679924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.679933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.680246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.680255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.680562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.680571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.680757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.680766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.680992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.681001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.681290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.681299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.681555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.681565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.681888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.681899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.682177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.682186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.682473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.682482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.682796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.682806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.682974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.682985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.683302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.683312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.683621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.683630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.683817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.683827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.684129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.684138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.684458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.684467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 
00:38:33.484 [2024-12-07 11:50:32.684774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.684783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.685004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.685020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.685312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.685321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.685648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.484 [2024-12-07 11:50:32.685659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.484 qpair failed and we were unable to recover it. 00:38:33.484 [2024-12-07 11:50:32.686067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.686078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.686392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.686404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.686749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.686759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.686965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.686974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.687148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.687158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.687471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.687480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.687651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.687662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.687926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.688237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.688247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.688564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.688574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.688809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.688818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.689114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.689123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.689353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.689363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.689738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.689748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.690029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.690039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.690369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.690380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.690682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.690691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.690887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.690897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.691195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.691205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.691497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.691507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.691808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.691817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.692003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.692017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.692336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.692346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.692676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.692686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.692842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.692852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.693119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.693128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.693441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.693450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.693748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.693757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.694078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.694087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.694284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.694293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.694605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.694615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.694790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.694800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.695019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.695029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.695232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.695248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.695552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.695561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.695845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.695854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 
00:38:33.485 [2024-12-07 11:50:32.696165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.696175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.485 [2024-12-07 11:50:32.696499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.485 [2024-12-07 11:50:32.696509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.485 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.696818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.696829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.697175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.697184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.697479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.697489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.697775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.697785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.698087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.698097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.698411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.698420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.698617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.698627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.698818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.698828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.699121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.699131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.699190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.699200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.699507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.699517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.699821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.699830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.700138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.700148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.700340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.700351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.700660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.700669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.700953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.700963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.701339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.701349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.701657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.701669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.701870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.701880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.702176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.702186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.702520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.702528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.702840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.702850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.703157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.703166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.703479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.703490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.703789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.703799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.704081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.704091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.704375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.704385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.704553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.704566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.704846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.704857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.705143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.705153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.705442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.705452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.705762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.705773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.706072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.706082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 
00:38:33.486 [2024-12-07 11:50:32.706378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.486 [2024-12-07 11:50:32.706388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.486 qpair failed and we were unable to recover it. 00:38:33.486 [2024-12-07 11:50:32.706680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.706689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.706971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.706987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.707288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.707297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.707571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.707581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.707887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.707897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.708207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.708217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.708524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.708533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.708816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.708826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.709123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.709133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.709452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.709461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.709767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.709777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.710104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.710113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.710448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.710459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.710774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.710783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.711100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.711110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.711434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.711444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.711710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.711720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.712044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.712053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.712374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.712384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.712679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.712689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.713034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.713045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.713360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.713370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.713678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.713687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.714059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.714072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.714366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.714376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.714683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.714693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.715014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.715024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.715295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.715304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.715510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.715519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.715841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.715850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.716162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.716172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.716466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.716475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.716780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.716789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.717095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.717105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 
00:38:33.487 [2024-12-07 11:50:32.717416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.717426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.717714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.717723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.717925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.717935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.487 [2024-12-07 11:50:32.718222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.487 [2024-12-07 11:50:32.718231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.487 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.718538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.718547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 
00:38:33.488 [2024-12-07 11:50:32.718843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.718852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.719150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.719160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.719434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.719444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.719646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.719656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.719950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.719960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 
00:38:33.488 [2024-12-07 11:50:32.720266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.720275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.720556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.720566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.720874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.720884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.721209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.721218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 00:38:33.488 [2024-12-07 11:50:32.721533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.488 [2024-12-07 11:50:32.721542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.488 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.754744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.754754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.754913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.754923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.755200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.755209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.755489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.755500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.755789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.755799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.756098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.756107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.756402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.756412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.756720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.756729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.757037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.757047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.757338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.757347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.757624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.757634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.757942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.757952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.758255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.758264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.758582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.758591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.758880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.758890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.759175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.759185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.759541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.759550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.759776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.759786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.760093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.760103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.760285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.760294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.760617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.760627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.760929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.760938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.761245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.761255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.761557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.761567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.761860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.761872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 
00:38:33.491 [2024-12-07 11:50:32.762177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.491 [2024-12-07 11:50:32.762187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.491 qpair failed and we were unable to recover it. 00:38:33.491 [2024-12-07 11:50:32.762479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.762492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.762655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.762666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.762970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.762979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.763158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.763168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.763473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.763482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.763788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.763797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.764083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.764092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.764416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.764426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.764596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.764607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.764820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.764830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.765130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.765140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.765445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.765454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.765759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.765769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.765967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.765976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.766287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.766296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.766494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.766505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.766799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.766809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.767202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.767212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.767541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.767550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.767855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.767864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.768253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.768262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.768591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.768601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.768910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.768919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.769058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.769068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.769355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.769365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.769572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.769891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.769901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.770176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.770186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.770510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.770519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.770827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.770837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.771143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.771154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.771472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.771483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.771664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.771673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.772057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.772067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 
00:38:33.492 [2024-12-07 11:50:32.772233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.772242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.772442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.772452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.772751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.772761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.773054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.492 [2024-12-07 11:50:32.773064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.492 qpair failed and we were unable to recover it. 00:38:33.492 [2024-12-07 11:50:32.773383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.773395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 
00:38:33.493 [2024-12-07 11:50:32.773497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.773507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.773776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.773799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.774019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.774031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.774340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.774350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.774660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.774670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 
00:38:33.493 [2024-12-07 11:50:32.774987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.774997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.775351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.775362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.775650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.775659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.775979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.775990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 00:38:33.493 [2024-12-07 11:50:32.776305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.493 [2024-12-07 11:50:32.776316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.493 qpair failed and we were unable to recover it. 
00:38:33.493 [2024-12-07 11:50:32.776496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.493 [2024-12-07 11:50:32.776505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.493 qpair failed and we were unable to recover it.
00:38:33.771 [2024-12-07 11:50:32.809515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.809525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 00:38:33.771 [2024-12-07 11:50:32.809684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.809695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 00:38:33.771 [2024-12-07 11:50:32.810000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.810009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 00:38:33.771 [2024-12-07 11:50:32.810111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.810123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 00:38:33.771 [2024-12-07 11:50:32.810446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.810456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 
00:38:33.771 [2024-12-07 11:50:32.810668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.771 [2024-12-07 11:50:32.810677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.771 qpair failed and we were unable to recover it. 00:38:33.771 [2024-12-07 11:50:32.811048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.811058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.811364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.811373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.811684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.811694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.812002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.812016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.812203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.812213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.812524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.812533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.812870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.813101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.813111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.813458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.813467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.813753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.813762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.813973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.813982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.814347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.814356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.814587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.814596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.814899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.814908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.815235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.815245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.815437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.815449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.815637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.815647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.815887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.815896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.816122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.816131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.816364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.816373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.816661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.816678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.816992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.817002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.817356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.817370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.817554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.817563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.817777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.817786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.818007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.818021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.818381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.818390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.818718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.818728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.818917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.818926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.819324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.819334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.819646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.819655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.819859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.819868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.820173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.820183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.820225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.820235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 
00:38:33.772 [2024-12-07 11:50:32.820568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.820872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.820881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.821198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.821208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.772 [2024-12-07 11:50:32.821425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.772 [2024-12-07 11:50:32.821434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.772 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.821746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.821755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.822070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.822079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.822415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.822424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.822612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.822621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.822932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.822942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.823307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.823317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.823630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.823640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.823945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.823955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.824181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.824191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.824516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.824525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.824888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.824897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.825191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.825201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.825527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.825537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.825822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.825832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.826146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.826155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.826466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.826475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.826786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.826796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.826984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.826995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.827364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.827373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.827718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.827728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.828067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.828077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.828276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.828285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.828474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.828483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.828856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.828866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.829076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.829085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.829425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.829435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.829762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.829771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.830084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.830093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.830397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.830407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.830709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.830718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.831029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.831039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [2024-12-07 11:50:32.831361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.831370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.831531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.831541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.831905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.831914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.832119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.832129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 00:38:33.773 [2024-12-07 11:50:32.832334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.773 [2024-12-07 11:50:32.832343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.773 qpair failed and we were unable to recover it. 
00:38:33.773 [... identical error pattern repeated through 2024-12-07 11:50:32.864: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:38:33.776 [2024-12-07 11:50:32.864381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.864390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.864703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.864712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.864887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.864896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.865102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.865112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.865442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.865451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.865749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.865760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.866073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.866084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.866381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.866390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.866707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.866717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.867028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.867039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.867419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.867428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.867734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.867743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.867951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.867961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.868271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.868281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.868568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.868577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.868905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.868914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.869178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.869187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.869476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.869486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.869896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.869910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.870129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.870140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.870474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.870483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.870799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.870808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.871123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.871133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.871497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.871507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.871786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.871795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.871977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.871986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.872316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.872326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.872636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.872645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.872950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.872959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.873156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.873167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.873457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.873469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.873777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.873788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.874073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.874084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.874407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.874417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.874711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.874721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 
00:38:33.777 [2024-12-07 11:50:32.875077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.875087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.875390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.875399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.875695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.777 [2024-12-07 11:50:32.875705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.777 qpair failed and we were unable to recover it. 00:38:33.777 [2024-12-07 11:50:32.876067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.876077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.876431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.876440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.876767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.876776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.877080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.877090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.877267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.877277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.877623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.877632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.877917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.877926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.878221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.878232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.878523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.878533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.878841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.878851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.879252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.879262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.879524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.879533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.879857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.879866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.880176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.880186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.880485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.880495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.880818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.880828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.881114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.881124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.881407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.881423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.881722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.881731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.882019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.882029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.882421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.882430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.882713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.882723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.883037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.883047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.883407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.883416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.883710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.883720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.884020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.884030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.884342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.884351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.884576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.884586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.884899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.884908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.885126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.885136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.885437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.885446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.885627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.885638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.885949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.885959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.886261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.886272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.886574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.886583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.886842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.886852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 00:38:33.778 [2024-12-07 11:50:32.887152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.887162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
00:38:33.778 [2024-12-07 11:50:32.887358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.778 [2024-12-07 11:50:32.887368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.778 qpair failed and we were unable to recover it. 
[... identical error triplet (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every connection attempt from 11:50:32.887688 through 11:50:32.920916 ...]
00:38:33.782 [2024-12-07 11:50:32.921223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.921232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.921520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.921530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.921735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.921746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.922118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.922127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.922425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.922435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.922782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.922791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.923199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.923209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.923420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.923429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.923617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.923626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.923817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.923827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.924051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.924060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.924386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.924395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.924706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.924715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.925017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.925027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.925358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.925367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.925665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.925674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.925887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.925897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.926199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.926209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.926507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.926516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.926829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.926839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.927220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.927229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.927551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.927560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.927873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.927882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.928082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.928091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.928412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.928422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.928734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.928743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.929074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.929386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.929396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.929701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.929710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.930147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.930157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.930479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.930488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 
00:38:33.782 [2024-12-07 11:50:32.930693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.930703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.930893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.930903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.931217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.931230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.782 qpair failed and we were unable to recover it. 00:38:33.782 [2024-12-07 11:50:32.931524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.782 [2024-12-07 11:50:32.931533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.931845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.931855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.932029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.932040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.932489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.932499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.932790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.932799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.933084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.933094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.933448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.933457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.933763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.933772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.934087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.934098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.934320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.934329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.934508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.934517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.934838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.934847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.934886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.934895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.935056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.935066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.935326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.935336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.935611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.935621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.935983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.935992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.936351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.936361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.936723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.936732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.937040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.937050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.937267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.937276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.937597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.937606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.937892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.937902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.938195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.938580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.938589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.938893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.938902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.939254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.939264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.939587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.939597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.939903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.939913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.940046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.940056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.940375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.940384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.940705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.940714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.940973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.940983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.941289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.941300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.941612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.941622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.941946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.941955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.942271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.942281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 
00:38:33.783 [2024-12-07 11:50:32.942443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.783 [2024-12-07 11:50:32.942454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.783 qpair failed and we were unable to recover it. 00:38:33.783 [2024-12-07 11:50:32.942801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.942811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.943138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.943148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.943466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.943476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.943625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.943634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.943922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.943931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.944331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.944341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.944629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.944639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.944966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.944975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.945205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.945215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.945536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.945545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.945658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.945669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.945959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.945968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.946306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.946316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.946605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.946614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.946829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.946838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.947177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.947187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.947524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.947533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.947825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.947834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.948157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.948166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.948359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.948369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.948640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.948650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.948903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.948912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.949274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.949283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.949465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.949474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.949734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.949743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.949911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.950127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.950137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.950486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.950495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.950794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.950803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.951074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.951084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.951376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.951385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.951688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.951697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.951877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.951887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.952233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.952242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 
00:38:33.784 [2024-12-07 11:50:32.952623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.952632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.952830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.952839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.952996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.953006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.953305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.784 [2024-12-07 11:50:32.953314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.784 qpair failed and we were unable to recover it. 00:38:33.784 [2024-12-07 11:50:32.953630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.953639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.953910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.953919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.954099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.954109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.954453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.954462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.954746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.954755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.955067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.955076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.955380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.955389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.955566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.955575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.955899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.955907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.956337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.956348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.956599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.956609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.956920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.956930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.957220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.957231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.957477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.957487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.957699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.957708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.958029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.958040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.958350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.958359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.958668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.958678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.958867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.958876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.959229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.959238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.959551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.959560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.959840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.959850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.960164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.960173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.960386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.960395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.960730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.960739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.961032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.961042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.961236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.961245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.961614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.961624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.961822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.961831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.962094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.962103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.962470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.962480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 
00:38:33.785 [2024-12-07 11:50:32.962662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.962672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.962941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.962951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.963317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.785 [2024-12-07 11:50:32.963327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.785 qpair failed and we were unable to recover it. 00:38:33.785 [2024-12-07 11:50:32.963639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.963648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.963952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.963961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.964261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.964271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.964489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.964499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.964803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.964812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.965117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.965129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.965465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.965474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.965777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.965787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.966094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.966104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.966431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.966440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.966595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.966604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.966914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.966923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.967248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.967258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.967523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.967532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.967820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.967829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.968031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.968041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.968366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.968375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.968653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.968674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.968954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.968963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.969299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.969309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.969655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.969664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.969978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.969987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.970344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.970353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.970503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.970513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.970789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.970798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.970969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.970979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 00:38:33.786 [2024-12-07 11:50:32.971317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.786 [2024-12-07 11:50:32.971327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.786 qpair failed and we were unable to recover it. 
00:38:33.786 [2024-12-07 11:50:32.971626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.786 [2024-12-07 11:50:32.971635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.786 qpair failed and we were unable to recover it.
00:38:33.789 [2024-12-07 11:50:33.004211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.789 [2024-12-07 11:50:33.004221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.789 qpair failed and we were unable to recover it.
00:38:33.789 [2024-12-07 11:50:33.004514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.004523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 00:38:33.789 [2024-12-07 11:50:33.004876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.004886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 00:38:33.789 [2024-12-07 11:50:33.005151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.005162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 00:38:33.789 [2024-12-07 11:50:33.005457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.005471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 00:38:33.789 [2024-12-07 11:50:33.005774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.005784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 
00:38:33.789 [2024-12-07 11:50:33.005974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.789 [2024-12-07 11:50:33.005983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.789 qpair failed and we were unable to recover it. 00:38:33.789 [2024-12-07 11:50:33.006320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.006330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.006654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.006663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.006961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.006970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.007293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.007303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.007602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.007611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.007921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.007931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.008095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.008106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.008449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.008458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.008744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.008754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.008951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.008960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.009251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.009261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.009606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.009615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.009930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.009940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.010247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.010257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.010550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.010559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.010863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.010872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.011183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.011194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.011515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.011527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.011723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.011733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.012064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.012074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.012293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.012302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.012650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.012659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.012966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.012975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.013282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.013292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.013473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.013484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.013676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.013686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.014001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.014014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.014355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.014364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.014680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.014689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.014880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.014889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.015214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.015224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.015562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.015571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.015863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.015873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.016167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.016177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 
00:38:33.790 [2024-12-07 11:50:33.016484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.016493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.016817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.016826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.016986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.016996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.017339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.790 [2024-12-07 11:50:33.017349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.790 qpair failed and we were unable to recover it. 00:38:33.790 [2024-12-07 11:50:33.017706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.017716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.018020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.018029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.018342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.018358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.018648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.018658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.018741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.018750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.019035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.019045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.019441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.019451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.019774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.019784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.020093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.020102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.020406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.020422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.020725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.020735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.020931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.020940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.021344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.021353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.021546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.021555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.021897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.021906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.022330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.022339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.022635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.022644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.022962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.022972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.023149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.023160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.023483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.023494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.023779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.023789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.024079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.024090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.024436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.024445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.024652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.024665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.024978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.024987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.025074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.025083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.025188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.025198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.025400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.025410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.025729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.025738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.026031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.026041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.026345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.026354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.026722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.026732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.026886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.026895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.027218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.027227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.027525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.027536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.027865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.027874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 
00:38:33.791 [2024-12-07 11:50:33.028074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.028084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.791 [2024-12-07 11:50:33.028430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.791 [2024-12-07 11:50:33.028439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.791 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.028764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.028773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.028859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.028867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.029089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.029100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.029425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.029434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.029745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.029754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.030077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.030086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.030444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.030454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.030751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.030760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.031093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.031104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.031416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.031426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.031715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.031725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.031865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.031875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.032102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.032112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.032487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.032496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.032842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.032851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.033082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.033091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.033414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.033424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.033733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.033742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.034140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.034150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.034522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.034531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.034729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.034738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.035076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.035088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.035396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.035409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.035611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.035620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.035938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.035948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.036301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.036311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.036655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.036664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.036986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.036995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.037294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.037304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.037513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.037522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.037855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.037866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.038173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.038183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.038375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.038385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.792 [2024-12-07 11:50:33.038657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.038666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.038972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.038982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.039155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.039166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.039490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.039499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 00:38:33.792 [2024-12-07 11:50:33.039811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.792 [2024-12-07 11:50:33.039821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.792 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.040002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.040017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.040309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.040318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.040646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.040656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.040971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.040980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.041327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.041336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.041649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.041659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.041989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.041998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.042296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.042306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.042599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.042608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.042894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.042904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.043191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.043202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.043520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.043533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.043830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.043840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.044154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.044164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.044457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.044472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.044771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.044780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.044950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.044959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.045277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.045286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.045583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.045592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.045785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.045795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.046176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.046186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.046502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.046511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.046818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.046827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.047116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.047127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.047448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.047458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.047775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.047785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.047956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.047967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.048286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.048295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.048589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.048598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.048906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.048915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.049230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.049240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.049543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.049553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.049854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.049865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.050170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.050179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.050353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.050363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 
00:38:33.793 [2024-12-07 11:50:33.050550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.050560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.050895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.793 [2024-12-07 11:50:33.050904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.793 qpair failed and we were unable to recover it. 00:38:33.793 [2024-12-07 11:50:33.051187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.051197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.051514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.051524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.051842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.051852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 
00:38:33.794 [2024-12-07 11:50:33.052041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.052052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.052365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.052375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.052722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.052732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.053031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.053041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.053329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.053338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 
00:38:33.794 [2024-12-07 11:50:33.053644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.053653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.054005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.054018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.054355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.054365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.054657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.054667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 00:38:33.794 [2024-12-07 11:50:33.054947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.794 [2024-12-07 11:50:33.054956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.794 qpair failed and we were unable to recover it. 
00:38:33.794 [2024-12-07 11:50:33.055246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.055256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.055571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.055580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.055774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.055783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.056079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.056089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.056410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.056420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.056705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.056715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.057047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.057057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.057340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.057350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.057543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.057553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.057772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.057782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.058082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.058092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.058381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.058391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.058697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.058706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.059017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.059029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.059318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.059328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.059688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.059698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.059891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.059901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.060103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.060113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.060194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.060204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.060450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.060460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.060643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.060653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.794 [2024-12-07 11:50:33.061007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.794 [2024-12-07 11:50:33.061024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.794 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.061316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.061325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.061552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.061561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.061876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.061885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.062186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.062199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.062528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.062538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.062868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.062877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.063191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.063200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.063364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.063374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.063564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.063574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.063734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.063744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.064068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.064077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.064289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.064299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.064582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.064592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.064841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.064850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.064919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.064928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.065173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.065183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.065465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.065475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.065811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.065820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.066132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.066142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.066442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.066451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.066758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.066768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.066979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.066988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.067182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.067191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.067526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.067536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.067809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.067819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.068135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.068145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.068508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.068517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.068817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.068826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.069148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.069158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.069461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.069471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.069642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.069652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.070021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.070035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.070342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.070351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.070635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.070653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.071034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.071043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.071356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.071365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.071522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.795 qpair failed and we were unable to recover it.
00:38:33.795 [2024-12-07 11:50:33.071758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.795 [2024-12-07 11:50:33.071769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.072039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.072048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.072353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.072363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.072672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.072684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.072991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.073001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.073299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.073308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.073609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.073619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.073944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.073953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.074147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.074157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.074484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.074494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.074675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.074685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.075045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.075055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.075257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.075266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.075617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.075626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.075938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.075948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.076159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.076168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.076493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.076503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.076581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.076590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.076863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.076873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.077177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.077187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.077502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.077511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.077807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.077825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.078176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.078186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.078499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.078509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.078690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.078700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.078901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.078911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.079186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.079196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.079520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.079530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.079832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.079841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.080188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.080199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.080511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.080525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.080696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.080705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.080878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.080888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.081179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.081190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.081508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.081519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.081718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.081727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.081952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.081962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.082256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.082266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.796 qpair failed and we were unable to recover it.
00:38:33.796 [2024-12-07 11:50:33.082575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.796 [2024-12-07 11:50:33.082584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.082893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.082902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.083192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.083202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.083533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.083542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.083853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.083863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.084178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.084188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.084522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.084532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.084836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.084846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.085026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.085035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.085358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.085367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.085725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.085735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.086030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.086040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.086243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.086253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.086534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.086544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.086754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.086765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.087107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.087116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.087284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.087294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.087620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.797 [2024-12-07 11:50:33.087630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:33.797 qpair failed and we were unable to recover it.
00:38:33.797 [2024-12-07 11:50:33.087965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.087974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.088287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.088297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.088605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.088614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.089004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.089017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.089376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.089386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 
00:38:33.797 [2024-12-07 11:50:33.089728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.089738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.089903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.089912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.090185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.090195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.090392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.090401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.090731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.090740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 
00:38:33.797 [2024-12-07 11:50:33.090940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.090949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.091272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.091282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.091601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.091611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.091980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.091990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.092300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.092309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 
00:38:33.797 [2024-12-07 11:50:33.092485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.092862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.092873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.093188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.093197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.093507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.093519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 00:38:33.797 [2024-12-07 11:50:33.093834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.093843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.797 qpair failed and we were unable to recover it. 
00:38:33.797 [2024-12-07 11:50:33.094179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.797 [2024-12-07 11:50:33.094189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.094494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.094504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.094812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.094822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.095018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.095029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.095274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.095284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.095593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.095603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.095914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.096108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.096119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.096471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.096481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.096774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.096784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.097109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.097119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.097429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.097438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.097763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.097772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.097963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.097972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.098326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.098336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.098648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.098657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.098967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.098977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.099367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.099376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.099560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.099574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.099649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.099659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.100020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.100030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.100336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.100345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.100629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.100639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.100953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.100962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.101344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.101353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.101677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.101687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.101871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.101882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.102155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.102165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.102485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.102798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.102808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 
00:38:33.798 [2024-12-07 11:50:33.103005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.103019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.103338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.103347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.103635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.103644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.103962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.103972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.798 qpair failed and we were unable to recover it. 00:38:33.798 [2024-12-07 11:50:33.104347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.798 [2024-12-07 11:50:33.104356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 
00:38:33.799 [2024-12-07 11:50:33.104503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.104513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.104804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.104813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.105133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.105142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.105335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.105346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.105743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.105753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 
00:38:33.799 [2024-12-07 11:50:33.106041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.106051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.106364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.106374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.106725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.106735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.107031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.107041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:33.799 [2024-12-07 11:50:33.107359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.107368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 
00:38:33.799 [2024-12-07 11:50:33.107662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.799 [2024-12-07 11:50:33.107672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:33.799 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.107986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.107997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.108293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.108303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.108631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.108641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.108938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.108948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 
00:38:34.073 [2024-12-07 11:50:33.109324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.109333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.109536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.109545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.109920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.109929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.110225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.110241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 00:38:34.073 [2024-12-07 11:50:33.110457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.073 [2024-12-07 11:50:33.110466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.073 qpair failed and we were unable to recover it. 
00:38:34.073 [2024-12-07 11:50:33.110787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.073 [2024-12-07 11:50:33.110802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.073 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock failure for tqpair=0x6150003aff00, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 11:50:33.110 through 11:50:33.143 ...]
00:38:34.076 [2024-12-07 11:50:33.144065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.144075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.144426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.144435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.144763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.145057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.145067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.145267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.145276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 
00:38:34.076 [2024-12-07 11:50:33.145517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.145527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.145829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.145838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.146119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.146129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.146424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.146433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.146737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.146747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 
00:38:34.076 [2024-12-07 11:50:33.147052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.147062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.147432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.147441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.147772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.147781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.148078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.148087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.148400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.148409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 
00:38:34.076 [2024-12-07 11:50:33.148596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.148608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.148881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.148891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.149102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.149112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.149319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.149329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.076 qpair failed and we were unable to recover it. 00:38:34.076 [2024-12-07 11:50:33.149644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.076 [2024-12-07 11:50:33.149654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.149937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.149946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.150266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.150276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.150590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.150599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.150801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.150811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.151107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.151116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.151433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.151443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.151766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.151775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.152006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.152020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.152320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.152329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.152647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.152656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.152963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.152973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.153254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.153264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.153457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.153468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.153739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.153748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.153932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.153943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.154254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.154264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.154590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.154599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.154926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.154936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.155233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.155243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.155558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.155569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.155886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.155900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.156175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.156185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.156358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.156368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.156577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.156586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.156894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.156904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.157301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.157312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.157614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.157624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.157921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.157932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.158248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.158259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.158563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.158573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 
00:38:34.077 [2024-12-07 11:50:33.158886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.158896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.159308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.077 [2024-12-07 11:50:33.159318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.077 qpair failed and we were unable to recover it. 00:38:34.077 [2024-12-07 11:50:33.159509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.159519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.159828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.159838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.160024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.160034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 
00:38:34.078 [2024-12-07 11:50:33.160362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.160374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.160667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.160676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.160997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.161006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.161304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.161314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.161587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.161596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 
00:38:34.078 [2024-12-07 11:50:33.161892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.161902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.162177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.162187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.162482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.162492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.162801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.162810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.163100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.163110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 
00:38:34.078 [2024-12-07 11:50:33.163474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.163483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.163768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.163778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.164119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.164129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.164443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.164453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.164666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.164675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 
00:38:34.078 [2024-12-07 11:50:33.164980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.164990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.165270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.165280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.165573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.165582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.165889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.165906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 00:38:34.078 [2024-12-07 11:50:33.166123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.078 [2024-12-07 11:50:33.166133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.078 qpair failed and we were unable to recover it. 
00:38:34.078 [2024-12-07 11:50:33.166403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:34.078 [2024-12-07 11:50:33.166412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 
00:38:34.078 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim through 11:50:33.200404 ...]
00:38:34.081 [2024-12-07 11:50:33.200715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.200725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.200936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.200946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.201247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.201257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.201570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.201580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.201885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.201894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 
00:38:34.081 [2024-12-07 11:50:33.202093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.202103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.202437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.202447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.202836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.202845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.203114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.203124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.203440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.203450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 
00:38:34.081 [2024-12-07 11:50:33.203738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.203748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.204024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.204035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.204215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.204225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.081 [2024-12-07 11:50:33.204491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.081 [2024-12-07 11:50:33.204500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.081 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.204828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.204837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.205145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.205155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.205474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.205484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.205668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.205677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.205971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.205980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.206197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.206206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.206484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.206493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.206808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.206818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.207119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.207128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.207443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.207452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.207659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.207668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.208063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.208075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.208342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.208351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.208657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.208666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.208973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.208982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.209287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.209297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.209588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.209597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.209904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.209913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.210236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.210247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.210632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.210640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.210934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.210943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.211160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.211172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.211370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.211379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.211661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.211670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.211964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.211973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.212262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.212271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.212556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.212566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.212885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.212895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.213063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.213077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.213315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.213325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.213542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.213551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 
00:38:34.082 [2024-12-07 11:50:33.213879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.213888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.214181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.214190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.082 qpair failed and we were unable to recover it. 00:38:34.082 [2024-12-07 11:50:33.214358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.082 [2024-12-07 11:50:33.214368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.214712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.214721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.215041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.215051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.215385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.215395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.215707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.215717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.216018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.216029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.216262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.216272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.216682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.216691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.216982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.217290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.217299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.217618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.217627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.217917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.217926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.218109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.218119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.218496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.218505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.218830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.219008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.219027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.219327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.219336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.219695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.219704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.219918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.219929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.220223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.220233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.220405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.220415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.220763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.220774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.221080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.221091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.221398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.221407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.221720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.221729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.222054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.222362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.222377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.222675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.222684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
00:38:34.083 [2024-12-07 11:50:33.222991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.223000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.223219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.223229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.223460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.223469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.223834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.223843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 00:38:34.083 [2024-12-07 11:50:33.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.083 [2024-12-07 11:50:33.224186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.083 qpair failed and we were unable to recover it. 
[... the same three-line error pattern (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 11:50:33.224506 through 11:50:33.256752 ...]
00:38:34.086 [2024-12-07 11:50:33.257065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.257074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 00:38:34.086 [2024-12-07 11:50:33.257392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.257401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 00:38:34.086 [2024-12-07 11:50:33.257708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.257717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 00:38:34.086 [2024-12-07 11:50:33.258046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.258055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 00:38:34.086 [2024-12-07 11:50:33.258362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.258372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 
00:38:34.086 [2024-12-07 11:50:33.258667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.086 [2024-12-07 11:50:33.258676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.086 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.258982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.258991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.259309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.259319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.259634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.259643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.259948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.259958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.260150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.260160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.260507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.260516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.260822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.260831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.261137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.261147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.261437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.261447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.261746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.261756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.262051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.262061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.262402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.262411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.262723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.262732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.262919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.262929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.263121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.263131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.263436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.263446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.263760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.263771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.263945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.263955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.264331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.264341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.264628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.264637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.264947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.264956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.265239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.265249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.265422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.265433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.265740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.265750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.266057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.266067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.266286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.266295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.266662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.266672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.267005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.267018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.267313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.267322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.267632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.267641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.267842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.267851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.268031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.268041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.268353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.268362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.268656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.268665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 
00:38:34.087 [2024-12-07 11:50:33.268978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.268988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.269166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.269177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.269351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.269360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.087 qpair failed and we were unable to recover it. 00:38:34.087 [2024-12-07 11:50:33.269677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.087 [2024-12-07 11:50:33.269686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.269995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.270007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.270388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.270397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.270707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.270717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.270931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.270941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.271258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.271268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.271566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.271576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.271785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.271795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.272140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.272150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.272455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.272465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.272638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.272648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.272927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.272937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.273222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.273231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.273452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.273462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.273786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.273795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.274108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.274118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.274434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.274444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.274754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.274763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.275052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.275062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.275367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.275378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.275682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.275692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.276032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.276045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.276325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.276334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.276611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.276620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.276807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.276817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.277091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.277100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.277406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.277415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.277687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.277696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.278004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.278016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.278309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.278318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.278622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.278631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.278840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.278849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.279180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.279189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.279470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.279480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.279778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.279787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.280161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.280171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.280452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.280461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 
00:38:34.088 [2024-12-07 11:50:33.280728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.088 [2024-12-07 11:50:33.280738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.088 qpair failed and we were unable to recover it. 00:38:34.088 [2024-12-07 11:50:33.281037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.281047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.281381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.281390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.281684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.281693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.281888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.281898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.282191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.282201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.282518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.282528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.282723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.282733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.282926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.282935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.283207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.283216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.283537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.283546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.283749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.283758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.284059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.284068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.284353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.284363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.284674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.284683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.284986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.284996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.285207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.285217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.285536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.285545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.285861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.285870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.286169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.286178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.286394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.286403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.286707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.286716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.287027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.287038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.287349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.287366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.287663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.287672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.287972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.287988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.288310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.288321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.288640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.288916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.288930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.289228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.289237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.289549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.289558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.289868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.289878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.290167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.290177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.290481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.290490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.290680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.290690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 
00:38:34.089 [2024-12-07 11:50:33.290998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.291007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.291348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.291358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.291702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.291710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.291869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.089 [2024-12-07 11:50:33.291879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.089 qpair failed and we were unable to recover it. 00:38:34.089 [2024-12-07 11:50:33.292163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.292174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.292485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.292494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.292695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.292704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.293033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.293044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.293357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.293366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.293675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.293684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.293972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.293981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.294291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.294301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.294526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.294535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.294719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.294730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.295065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.295074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.295365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.295381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.295684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.295693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.295938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.295947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.296163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.296173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.296482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.296491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.296798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.296807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.297113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.297122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.297414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.297431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.297739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.297748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.298054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.298064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.298349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.298358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.298669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.298678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.298984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.298995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.299178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.299189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.299529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.299538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.299822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.299836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.300138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.300148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.300463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.300472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.300741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.300750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.301034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.301044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.301364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.301373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.301690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.301700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.302006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.302019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.302302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.302311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.302632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.302642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 
00:38:34.090 [2024-12-07 11:50:33.302950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.302959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.303243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.090 [2024-12-07 11:50:33.303252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.090 qpair failed and we were unable to recover it. 00:38:34.090 [2024-12-07 11:50:33.303579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.303589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.303895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.303905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.304216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.304225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 
00:38:34.091 [2024-12-07 11:50:33.304488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.304497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.304695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.304705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.305021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.305031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.305350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.305359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.305668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.305677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 
00:38:34.091 [2024-12-07 11:50:33.305965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.305974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.306283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.306293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.306451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.306461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.306774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.306783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 00:38:34.091 [2024-12-07 11:50:33.307075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.091 [2024-12-07 11:50:33.307084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.091 qpair failed and we were unable to recover it. 
00:38:34.091 [2024-12-07 11:50:33.307443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.091 [2024-12-07 11:50:33.307452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.091 qpair failed and we were unable to recover it.
00:38:34.094 [2024-12-07 11:50:33.340352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.340361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.340677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.340687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.341005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.341020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.341311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.341321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.341691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.341700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 
00:38:34.094 [2024-12-07 11:50:33.341980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.341989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.342289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.342298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.342582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.342592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.342901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.342910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.343274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.343284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 
00:38:34.094 [2024-12-07 11:50:33.343598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.343607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.343903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.343913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.344230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.344239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.344549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.344558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.344881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.344891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 
00:38:34.094 [2024-12-07 11:50:33.345114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.345127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.345451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.345461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.345670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.345682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.346005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.346018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 00:38:34.094 [2024-12-07 11:50:33.346319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.094 [2024-12-07 11:50:33.346328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.094 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.346614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.346623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.346933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.346942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.347285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.347295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.347593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.347602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.347909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.347919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.348240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.348249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.348557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.348566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.348729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.348739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.348901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.348910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.349130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.349437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.349446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.349779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.349789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.350109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.350118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.350340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.350349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.350561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.350570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.350901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.350910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.351236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.351246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.351554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.351563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.351869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.351878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.352065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.352075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.352298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.352307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.352628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.352638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.352945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.352955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.353303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.353313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.353506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.353515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.353798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.353807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.354020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.354029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.354304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.354313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.354617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.354633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.354952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.354961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 
00:38:34.095 [2024-12-07 11:50:33.355265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.355274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.355607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.355617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.355887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.355897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.356106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.356115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.095 qpair failed and we were unable to recover it. 00:38:34.095 [2024-12-07 11:50:33.356373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.095 [2024-12-07 11:50:33.356382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.096 [2024-12-07 11:50:33.356698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.356707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.357021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.357031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.357331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.357547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.357556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.357859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.357869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.096 [2024-12-07 11:50:33.358058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.358067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.358161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.358170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.358429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.358439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.358744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.358753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.359050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.359059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.096 [2024-12-07 11:50:33.359353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.359362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.359691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.359701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.360017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.360026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.360322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.360332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.360536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.360545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.096 [2024-12-07 11:50:33.360742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.360751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.360940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.360949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.361298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.361308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.361495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.361505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 00:38:34.096 [2024-12-07 11:50:33.361891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.361900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.096 [2024-12-07 11:50:33.362225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.096 [2024-12-07 11:50:33.362235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.096 qpair failed and we were unable to recover it. 
00:38:34.099 [... identical connect() failed, errno = 111 / sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. messages repeated for each retry through 2024-12-07 11:50:33.394661 ...] 
00:38:34.099 [2024-12-07 11:50:33.394958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.394967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.395261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.395279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.395596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.395605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.395907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.395917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.396243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.396253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 
00:38:34.099 [2024-12-07 11:50:33.396470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.396479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.396654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.396664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.396977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.396987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.397301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.397311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.397665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.397675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 
00:38:34.099 [2024-12-07 11:50:33.397986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.397996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.398295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.398305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.398583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.398594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.398915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.398925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.099 [2024-12-07 11:50:33.399109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.399120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 
00:38:34.099 [2024-12-07 11:50:33.399443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.099 [2024-12-07 11:50:33.399453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.099 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.399754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.399765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.400077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.400092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.400423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.400433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.400743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.400752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.401055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.401064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.401387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.401397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.401709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.401718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.402022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.402032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.402374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.402383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.402687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.402696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.402934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.402943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.403245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.403257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.403561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.403570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.403854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.403864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.404165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.404176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.404485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.404495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.404697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.404707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.404898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.404908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.405214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.405224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.405499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.405508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.405818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.405827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.406186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.406196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.406499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.406516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.406817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.406826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.407132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.407142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.407480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.407489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.407801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.407810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.408126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.408136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.408435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.408445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 
00:38:34.100 [2024-12-07 11:50:33.408635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.408645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.408954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.408963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.100 [2024-12-07 11:50:33.409155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.100 [2024-12-07 11:50:33.409165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.100 qpair failed and we were unable to recover it. 00:38:34.101 [2024-12-07 11:50:33.409438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.101 [2024-12-07 11:50:33.409448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.101 qpair failed and we were unable to recover it. 00:38:34.101 [2024-12-07 11:50:33.409744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.101 [2024-12-07 11:50:33.409753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.101 qpair failed and we were unable to recover it. 
00:38:34.375 [2024-12-07 11:50:33.410066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.375 [2024-12-07 11:50:33.410077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.375 qpair failed and we were unable to recover it. 00:38:34.375 [2024-12-07 11:50:33.410385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.375 [2024-12-07 11:50:33.410395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.410704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.410714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.411008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.411022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.411202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.411212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-12-07 11:50:33.411532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.411540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.411822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.411839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.412053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.412062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.412326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.412335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.412640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.412649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-12-07 11:50:33.412962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.412971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.413168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.413178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.413540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.413550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.413816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.413825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.414178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.414187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-12-07 11:50:33.414512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.414521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.414828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.414837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.415139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.415149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.415543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.415552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.415842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.415852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-12-07 11:50:33.416127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.416138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.416457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.416467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.416771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.416780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.417072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.417082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-12-07 11:50:33.417286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.376 [2024-12-07 11:50:33.417295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-12-07 11:50:33.417651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.376 [2024-12-07 11:50:33.417660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.376 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x6150003aff00 at 10.0.0.2:4420, followed by "qpair failed and we were unable to recover it.") repeats roughly 113 more times with only the timestamps changing, from 11:50:33.417963 through 11:50:33.452035 ...]
00:38:34.379 [2024-12-07 11:50:33.452211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.379 [2024-12-07 11:50:33.452220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.379 qpair failed and we were unable to recover it.
00:38:34.379 [2024-12-07 11:50:33.452586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.452595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.452904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.452914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.453211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.453220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.453513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.453528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.453718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 
00:38:34.379 [2024-12-07 11:50:33.454046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.454056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.454234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.454244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.454564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.454573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.454860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.454870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.379 [2024-12-07 11:50:33.455076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.455086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 
00:38:34.379 [2024-12-07 11:50:33.455404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.379 [2024-12-07 11:50:33.455413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.379 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.455760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.455769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.456163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.456172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.456405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.456415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.456688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.456697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.456987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.456997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.457309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.457318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.457602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.457612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.457936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.457946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.458228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.458241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.458551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.458560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.458866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.458876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.459177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.459186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.459477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.459493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.459677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.459688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.459960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.459969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.460144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.460154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.460459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.460469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.460846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.460855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.461121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.461131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.461461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.461471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.461759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.461769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.462084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.462094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.462408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.462417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.462747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.462757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.463058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.463067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.463393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.463676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.463694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.464033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.464043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.464414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.464423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 
00:38:34.380 [2024-12-07 11:50:33.464724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.464733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.465041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.465050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.465346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.465364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.380 [2024-12-07 11:50:33.465652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.380 [2024-12-07 11:50:33.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.380 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.465831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.465841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.466191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.466200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.466501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.466511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.466813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.466822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.467129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.467138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.467446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.467455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.467761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.467770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.468095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.468105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.468284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.468293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.468649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.468658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.468970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.468979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.469283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.469292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.469602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.469611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.469807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.469816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.470083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.470093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.470301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.470309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.470660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.470980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.470989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.471288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.471298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.471590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.471599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.471895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.471906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.472225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.472234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.472403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.472413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.472781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.472790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.472979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.472988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.473331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.473340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.473652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.473661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.473949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.473958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.474254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.474264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.474577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.474587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.474925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.474935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.381 [2024-12-07 11:50:33.475241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.475251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.475555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.475564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.475845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.475861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.476162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.476172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 00:38:34.381 [2024-12-07 11:50:33.476483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.381 [2024-12-07 11:50:33.476492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.381 qpair failed and we were unable to recover it. 
00:38:34.384 [2024-12-07 11:50:33.509566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.384 [2024-12-07 11:50:33.509575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.384 qpair failed and we were unable to recover it. 00:38:34.384 [2024-12-07 11:50:33.509872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.384 [2024-12-07 11:50:33.509881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.384 qpair failed and we were unable to recover it. 00:38:34.384 [2024-12-07 11:50:33.510082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.384 [2024-12-07 11:50:33.510098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.384 qpair failed and we were unable to recover it. 00:38:34.384 [2024-12-07 11:50:33.510415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.384 [2024-12-07 11:50:33.510424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.384 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.510732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.510742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.511030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.511039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.511321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.511330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.511637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.511647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.511952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.511962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.512253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.512263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.512568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.512578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.512893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.512903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.513210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.513220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.513524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.513533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.513839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.513849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.514142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.514152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.514473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.514482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.514773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.514782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.515087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.515096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.515412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.515420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.515577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.515587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.515968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.515981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.516186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.516196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.516526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.516535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.516811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.516820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.517145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.517154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.517339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.517349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.517658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.517668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.518057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.518067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.518357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.518366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.518753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.518762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.519047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.519057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.519331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.519340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.519648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.519657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.519979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.519990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 
00:38:34.385 [2024-12-07 11:50:33.520276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.520287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.520630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.520639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.385 [2024-12-07 11:50:33.520931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.385 [2024-12-07 11:50:33.520940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.385 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.521138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.521148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.521445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.521454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.521665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.521674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.521937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.521947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.522276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.522286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.522583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.522593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.522869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.522880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.523184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.523193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.523502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.523512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.523821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.523830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.524107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.524116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.524428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.524437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.524710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.524719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.525026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.525035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.525228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.525238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.525542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.525552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.525855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.525864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.526207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.526217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.526530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.526539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.526827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.526836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.527146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.527156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.527460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.527469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.527778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.527787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.528074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.528083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.528397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.528407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.528688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.528698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.529016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.529025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.529313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.529322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.529631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.529640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.529966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.529975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.530324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.530333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.530633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.530642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.531026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.531036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.531217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.531227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.531533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.531542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.531844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.531853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 00:38:34.386 [2024-12-07 11:50:33.532166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.532179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.386 qpair failed and we were unable to recover it. 
00:38:34.386 [2024-12-07 11:50:33.532505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.386 [2024-12-07 11:50:33.532514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.532796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.532811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.533087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.533096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.533414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.533423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.533619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.533628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 
00:38:34.387 [2024-12-07 11:50:33.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.533855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.534255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.534265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.534425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.534435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.534616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.534625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 00:38:34.387 [2024-12-07 11:50:33.534837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.387 [2024-12-07 11:50:33.534846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.387 qpair failed and we were unable to recover it. 
00:38:34.387 [2024-12-07 11:50:33.535165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.535179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.535378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.535387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.535712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.535721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.535992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.536002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.536300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.536311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.536616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.536627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.536973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.536982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.537307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.537316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.537601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.537619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.537917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.537926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.538089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.538100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.538439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.538448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.538761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.538771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2789679 Killed "${NVMF_APP[@]}" "$@"
00:38:34.387 [2024-12-07 11:50:33.538947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.538958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.539241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.539251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.539419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.539429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:34.387 [2024-12-07 11:50:33.539750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.539761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.540058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:34.387 [2024-12-07 11:50:33.540069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:34.387 [2024-12-07 11:50:33.540359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.540371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:34.387 [2024-12-07 11:50:33.540688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.540698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.387 [2024-12-07 11:50:33.540993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.541003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.541305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.541319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.541652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.541662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.541949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.541960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.542259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.542269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.387 qpair failed and we were unable to recover it.
00:38:34.387 [2024-12-07 11:50:33.542573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.387 [2024-12-07 11:50:33.542583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.542890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.542899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.543094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.543104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.543393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.543403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.543780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.543789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.543951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.543962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.544382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.544392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.544691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.544701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.545013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.545024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.545336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.545347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.545630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.545647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.545955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.545965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.546243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.546253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.546551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.546560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.546856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.546865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.547169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.547179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.547305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.547316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.547662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.547671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.547996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.548006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.548318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.548327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2790713
00:38:34.388 [2024-12-07 11:50:33.548616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.548628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2790713
00:38:34.388 [2024-12-07 11:50:33.548834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.548845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:34.388 [2024-12-07 11:50:33.549162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.549174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2790713 ']'
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:34.388 [2024-12-07 11:50:33.549486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.549497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:34.388 [2024-12-07 11:50:33.549687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.549698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.549833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.549843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:34.388 [2024-12-07 11:50:33.550303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.550314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 11:50:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.388 [2024-12-07 11:50:33.550607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.550618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.551133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.551151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.551445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.551455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.551644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.551653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.551914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.551925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.552250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.552261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.552477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.388 [2024-12-07 11:50:33.552488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.388 qpair failed and we were unable to recover it.
00:38:34.388 [2024-12-07 11:50:33.552616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.552628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.552919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.552929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.553222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.553233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.553458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.553468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.553770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.553780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.554078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.554093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.554420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.554430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.554740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.554750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.555032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.555042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.555249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.555259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.555518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.555528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.555837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.555846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.556229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.556240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.556542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.556551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.556850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.556860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.557065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.557075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.557379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.557391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.557705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.557714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.558004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.558019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.558366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.558376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.558687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.558696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.558911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.558921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.559215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.559225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.559577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.559588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.559955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.560283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.560294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.560594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.560605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.560913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.560925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.561117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.561127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.561446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.389 [2024-12-07 11:50:33.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.389 qpair failed and we were unable to recover it.
00:38:34.389 [2024-12-07 11:50:33.561750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.389 [2024-12-07 11:50:33.561760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.389 qpair failed and we were unable to recover it. 00:38:34.389 [2024-12-07 11:50:33.562083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.389 [2024-12-07 11:50:33.562094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.389 qpair failed and we were unable to recover it. 00:38:34.389 [2024-12-07 11:50:33.562401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.389 [2024-12-07 11:50:33.562412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.389 qpair failed and we were unable to recover it. 00:38:34.389 [2024-12-07 11:50:33.562685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.389 [2024-12-07 11:50:33.562695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.389 qpair failed and we were unable to recover it. 00:38:34.389 [2024-12-07 11:50:33.562883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.562894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.563100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.563111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.563469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.563479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.563784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.563794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.564127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.564138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.564439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.564449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.564637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.564648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.564933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.564943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.565263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.565274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.565577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.565589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.565904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.565914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.566239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.566250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.566542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.566552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.566832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.566842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.567022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.567033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.567365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.567375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.567710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.567720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.567931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.567941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.568268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.568280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.568627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.568638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.568928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.568939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.569249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.569259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.569569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.569581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.569889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.569899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.570234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.570245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.570557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.570567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.570849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.570859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.570987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.570998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.571387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.571398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.571703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.571713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.571948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.571958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.572267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.572277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.572563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.572573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.572726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.572736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.573041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.573052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.573271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.573288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 
00:38:34.390 [2024-12-07 11:50:33.573575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.573586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.390 qpair failed and we were unable to recover it. 00:38:34.390 [2024-12-07 11:50:33.573897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.390 [2024-12-07 11:50:33.573907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.574231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.574241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.574537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.574547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.574892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.574901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.575179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.575189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.575517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.575526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.575764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.575773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.575877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.575886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.576252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.576262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.576450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.576459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.576807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.576816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.577183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.577194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.577513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.577707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.577716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.577876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.577885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.578169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.578181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.578355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.578366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.578648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.578658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.578826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.578836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.579030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.579041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.579383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.579392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.579572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.579582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.579853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.579862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.580071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.580081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.580265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.580274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.580558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.580570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.580880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.580890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.581086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.581095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.581405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.581415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.581717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.581726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.582041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.582052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.582368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.582378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.582673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.582719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.582796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.582806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 
00:38:34.391 [2024-12-07 11:50:33.583081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.583091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.583512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.583522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.583832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.583842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.584189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.391 [2024-12-07 11:50:33.584200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.391 qpair failed and we were unable to recover it. 00:38:34.391 [2024-12-07 11:50:33.584574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.584583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.584869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.584879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.585180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.585190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.585528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.585538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.585846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.585855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.586171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.586489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.586498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.586859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.586868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.587063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.587073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.587404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.587413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.587733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.587742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.587904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.587914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.588247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.588257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.588562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.588572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.588898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.588908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.589205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.589215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.589524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.589533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.589815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.589832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.590144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.590154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.590349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.590359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.590565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.590575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.590911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.590921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.591242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.591252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.591558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.591569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.591721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.591735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.591946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.591955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.592139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.592150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.592430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.592443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.592749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.592759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.593055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.593065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.593375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.593384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.593692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.593702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.594009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.594022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.594314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.594324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.594653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.594662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.594823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.594833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 
00:38:34.392 [2024-12-07 11:50:33.595183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.595193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.392 [2024-12-07 11:50:33.595486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.392 [2024-12-07 11:50:33.595502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.392 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.595801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.595810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.596100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.596110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.596486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.596495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.596683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.596693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.597048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.597059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.597256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.597265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.597583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.597593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.597904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.597913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.598228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.598238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.598553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.598563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.598873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.598883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.599050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.599059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.599419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.599429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.599746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.599755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.600060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.600070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.600408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.600417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.600735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.600746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.601053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.601063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.601273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.601591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.601600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.601774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.601784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.602062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.602072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.602267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.602277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.602601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.602611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.602799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.602808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.603220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.603230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.603539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.603548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.603834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.603850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.604076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.604087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.604419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.604429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.604702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.604712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.605019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.605029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.605352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.605361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 
00:38:34.393 [2024-12-07 11:50:33.605668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.605678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.605842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.605851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.606120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.606130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.606358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.393 [2024-12-07 11:50:33.606368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.393 qpair failed and we were unable to recover it. 00:38:34.393 [2024-12-07 11:50:33.606676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.606686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.606988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.606999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.607334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.607343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.607632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.607642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.607918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.607927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.608309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.608320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.608525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.608534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.608807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.608817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.609132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.609141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.609468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.609477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.609809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.609818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.609979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.609989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.610291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.610301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.610612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.610625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.610806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.610816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.611109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.611118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.611528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.611880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.611889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.612078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.612088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.612527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.612539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.612725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.612735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.612938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.612948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.613304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.613314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.613542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.613551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.613717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.613727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 00:38:34.394 [2024-12-07 11:50:33.614030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.614040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it. 
00:38:34.394 [2024-12-07 11:50:33.614429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.394 [2024-12-07 11:50:33.614438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.394 qpair failed and we were unable to recover it.
00:38:34.394 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x6150003aff00 (addr=10.0.0.2, port=4420), each attempt ending in "qpair failed and we were unable to recover it.", repeated continuously from 11:50:33.614747 through 11:50:33.633801 ...]
00:38:34.396 [2024-12-07 11:50:33.633792] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:38:34.396 [2024-12-07 11:50:33.633896] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:34.396 [... the same error pair repeated from 11:50:33.633848 through 11:50:33.646403 ...]
00:38:34.397 [2024-12-07 11:50:33.646809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.646818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it.
00:38:34.397 [2024-12-07 11:50:33.647137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.647148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.397 [2024-12-07 11:50:33.647313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.647327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.397 [2024-12-07 11:50:33.647635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.647646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.397 [2024-12-07 11:50:33.647958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.647968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.397 [2024-12-07 11:50:33.648275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.648285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 
00:38:34.397 [2024-12-07 11:50:33.648561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.648571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.397 [2024-12-07 11:50:33.648898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.397 [2024-12-07 11:50:33.648908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.397 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.649144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.649451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.649462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.649755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.649765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.650085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.650095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.650413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.650424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.650667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.650676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.650982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.650991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.651281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.651291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.651475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.651484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.651845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.651854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.652163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.652173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.652494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.652504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.652814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.652823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.653027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.653037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.653376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.653387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.653712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.653722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.653926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.653936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.654283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.654292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.654458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.654467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.654642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.654651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.654965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.654975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.655150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.655161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.655431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.655442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.655749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.655758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.656063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.656073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.656380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.656389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.656694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.656704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.657038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.657050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.657353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.657362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.657700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.657709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.657885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.657895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.658164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.658174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.658498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.658508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 
00:38:34.398 [2024-12-07 11:50:33.658683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.398 [2024-12-07 11:50:33.658693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.398 qpair failed and we were unable to recover it. 00:38:34.398 [2024-12-07 11:50:33.658809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.658818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.659115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.659125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.659449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.659458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.659763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.659773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.660058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.660068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.660339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.660349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.660658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.660667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.660941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.660950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.661286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.661296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.661579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.661588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.661926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.661938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.662221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.662233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.662549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.662558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.662846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.662856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.663167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.663176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.663468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.663479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.663673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.663682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.663965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.663975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.664047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.664058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.664340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.664350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.664614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.664625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.664825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.664834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.665091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.665101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.665418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.665428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.665743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.665756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.666052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.666062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.666379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.666388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.666692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.666702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.667007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.667027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.667343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.667353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.667562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.667573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.667901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.667911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.668205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.668216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 00:38:34.399 [2024-12-07 11:50:33.668534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.668543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it. 
00:38:34.399 [2024-12-07 11:50:33.668918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.399 [2024-12-07 11:50:33.668928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.399 qpair failed and we were unable to recover it.
[identical error triplet (posix_sock_create connect() errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error, tqpair=0x6150003aff00, addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeated with timestamps ranging from 2024-12-07 11:50:33.668918 through 2024-12-07 11:50:33.703391]
00:38:34.402 [2024-12-07 11:50:33.703542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.402 [2024-12-07 11:50:33.703551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.402 qpair failed and we were unable to recover it. 00:38:34.402 [2024-12-07 11:50:33.703835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.402 [2024-12-07 11:50:33.703845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.402 qpair failed and we were unable to recover it. 00:38:34.402 [2024-12-07 11:50:33.704185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.402 [2024-12-07 11:50:33.704195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.402 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.704480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.704495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.704789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.704798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 
00:38:34.403 [2024-12-07 11:50:33.705104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.705115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.705436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.705446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.705740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.705750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.706050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.706061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.706356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.706366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 
00:38:34.403 [2024-12-07 11:50:33.706741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.706751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.707055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.707064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.707275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.707285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.707482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.707491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.707808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.707818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 
00:38:34.403 [2024-12-07 11:50:33.708210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.708221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.708521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.708531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.708822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.708832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.709017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.709028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.709335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.709344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 
00:38:34.403 [2024-12-07 11:50:33.709656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.709666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.709969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.709978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.710309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.710319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.710625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.710635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.710943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.710952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 
00:38:34.403 [2024-12-07 11:50:33.711264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.711274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.711574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.711583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.403 [2024-12-07 11:50:33.711916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.403 [2024-12-07 11:50:33.711925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.403 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.712396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.712411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.712679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.712691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 
00:38:34.679 [2024-12-07 11:50:33.713018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.713028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.713355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.713365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.713711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.713722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.714006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.714023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.714324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.714333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 
00:38:34.679 [2024-12-07 11:50:33.714554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.714563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.714885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.714895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.715126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.715136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.715525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.715534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.715824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.715834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 
00:38:34.679 [2024-12-07 11:50:33.716169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.716179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.716482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.716492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.716800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.679 [2024-12-07 11:50:33.716809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.679 qpair failed and we were unable to recover it. 00:38:34.679 [2024-12-07 11:50:33.716988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.716997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.717373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.717382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.717788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.717797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.718085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.718095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.718395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.718404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.718700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.718711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.719036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.719047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.719378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.719387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.719680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.719697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.719903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.719912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.720238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.720248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.720576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.720586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.720868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.720878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.721178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.721188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.721486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.721495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.721807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.721817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.722090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.722099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.722428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.722437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.722746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.722755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.723109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.723119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.723310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.723321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.723524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.723534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.723743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.724070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.724080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.724409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.724424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.724753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.724763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.725112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.725122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.725413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.725428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.725731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.725741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.726024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.726034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.726328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.726338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.726627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.726637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 
00:38:34.680 [2024-12-07 11:50:33.726951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.726960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.727138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.680 [2024-12-07 11:50:33.727148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.680 qpair failed and we were unable to recover it. 00:38:34.680 [2024-12-07 11:50:33.727425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.681 [2024-12-07 11:50:33.727434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.681 qpair failed and we were unable to recover it. 00:38:34.681 [2024-12-07 11:50:33.727770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.681 [2024-12-07 11:50:33.727780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.681 qpair failed and we were unable to recover it. 00:38:34.681 [2024-12-07 11:50:33.728073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.681 [2024-12-07 11:50:33.728083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.681 qpair failed and we were unable to recover it. 
00:38:34.684 [2024-12-07 11:50:33.761310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.761319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.761639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.761648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.761945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.761954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.762254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.762265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.762578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.762588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 
00:38:34.684 [2024-12-07 11:50:33.762787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.762800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.763193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.763202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.763495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.763505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.763685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.763694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.764026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.764035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 
00:38:34.684 [2024-12-07 11:50:33.764325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.764334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.764503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.764513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.764747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.684 [2024-12-07 11:50:33.764756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.684 qpair failed and we were unable to recover it. 00:38:34.684 [2024-12-07 11:50:33.765083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.765093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.765388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.765398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.765702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.765711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.766024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.766034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.766337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.766346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.766638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.766648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.766966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.766976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.767290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.767299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.767483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.767493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.767696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.767705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.768018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.768028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.768342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.768351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.768546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.768557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.768873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.768883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.769211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.769220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.769504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.769514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.769860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.769869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.770066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.770076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.770417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.770427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.770752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.770761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.771065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.771074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.771434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.771444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.771748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.771757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.771959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.771968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.772298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.772307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.772599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.772609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.772832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.772841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.773139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.773149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.773484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.773493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.773685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.773694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.774045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.774054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.774425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.774434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 
00:38:34.685 [2024-12-07 11:50:33.774753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.774762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.775057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.685 [2024-12-07 11:50:33.775066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.685 qpair failed and we were unable to recover it. 00:38:34.685 [2024-12-07 11:50:33.775391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.775400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.775683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.775693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.776000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.776009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.776300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.776310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.776623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.776633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.776940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.776950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.777259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.777269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.777564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.777574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.777874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.777883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.778177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.778187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.778361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.778371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.778548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.778560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.778894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.778903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.779238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.779247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.779545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.779560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.779852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.779861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.780174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.780183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.780547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.780556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.780839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.780848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.781177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.781187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.781490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.781505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.781693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.781703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.781982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.781995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.782110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.782119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.782443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.782452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.782733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.782748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.783101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.783111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.783423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.783432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.783599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.783608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.783919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.783929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.784249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.784259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.784570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.784580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.686 [2024-12-07 11:50:33.784889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.784897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 
00:38:34.686 [2024-12-07 11:50:33.785172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.686 [2024-12-07 11:50:33.785181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.686 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.785544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.785553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.785841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.785850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.786175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.786184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.786478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.786492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.786869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.786878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.787164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.787174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.787359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.787369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.787688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.787697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.788008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.788020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.788367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.788377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.788684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.788692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.788978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.788988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.789203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.789213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.789553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.789562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.789720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.789730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.789922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.789931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.790219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.790228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.790401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.790412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.790719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.790728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.791000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.791009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.791325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.791334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.791624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.791634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.791910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.791918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.792181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.792190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.792516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.792525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.792821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.792830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.793036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.793045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.793358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.793367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.793556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:34.687 [2024-12-07 11:50:33.793691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.793700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.687 [2024-12-07 11:50:33.793996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.794006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.794282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.794294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.794576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.794587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.794770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.794780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 00:38:34.687 [2024-12-07 11:50:33.795094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.687 [2024-12-07 11:50:33.795103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.687 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.795420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.795429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.795758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.795767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.796070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.796081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.796392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.796401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.796674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.796684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.796993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.797002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.797294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.797303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.797606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.797615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.797769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.797778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.798182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.798191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.798378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.798388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.798716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.798725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.799047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.799056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.799383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.799392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.799702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.799712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.800078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.800089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.800428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.800438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.800727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.800737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.800920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.801126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.801136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.801405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.801415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.801739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.801748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.802064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.802074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.802358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.802369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.802534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.802544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.802850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.802860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.803181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.803192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.803498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.803515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.803829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.803838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.804139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.804149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 
00:38:34.688 [2024-12-07 11:50:33.804439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.804449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.804739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.804750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.805063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.805072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.688 [2024-12-07 11:50:33.805371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.688 [2024-12-07 11:50:33.805386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.688 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.805746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.805758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.689 [2024-12-07 11:50:33.805927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.805937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.806264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.806276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.806560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.806570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.806919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.806930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.807238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.807248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.689 [2024-12-07 11:50:33.807566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.807576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.807970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.807979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.808286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.808296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.808459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.808469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.808760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.808770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.689 [2024-12-07 11:50:33.808933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.808943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.809303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.809314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.809540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.809550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.809716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.809726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.809954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.809964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.689 [2024-12-07 11:50:33.810197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.810207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.810377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.810388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.810597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.810607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.810929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.810939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.811305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.811315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.689 [2024-12-07 11:50:33.811618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.811628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.811925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.811935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.812136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.812147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.812489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.812499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 00:38:34.689 [2024-12-07 11:50:33.812864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.689 [2024-12-07 11:50:33.812875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.689 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.844912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.844922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.845240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.845249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.845444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.845453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.845803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.845812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.846134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.846144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.846458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.846468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.846549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.846558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.846739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.846749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.846937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.846946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.847236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.847245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.847433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.847442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.847658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.847667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.847868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.847877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.848194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.848205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.848540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.848549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.848875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.848885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.849240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.849250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.849571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.849581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.849800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.849810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.850028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.850038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.850236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.850245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.850517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.850526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.850831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.850840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.851162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.851171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.851428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.851437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.851613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.851622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.851931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.851940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.852128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.852138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.852274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.852284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.693 [2024-12-07 11:50:33.852631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.852641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 
00:38:34.693 [2024-12-07 11:50:33.852938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.693 [2024-12-07 11:50:33.852947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.693 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.853170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.853588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.853597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.853890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.853899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.854100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.854110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.854337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.854347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.854534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.854544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.854884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.854894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.855225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.855234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.855566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.855576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.855896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.855909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.856218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.856228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.856526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.856535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.856769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.856778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.856845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.856855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.857077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.857086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.857474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.857484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.857790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.857800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.858083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.858092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.858407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.858418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.858636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.858645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.858868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.858878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.859002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.859014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.859333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.859345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.859651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.859661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.859848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.859857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.860169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.860180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.860351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.860361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.860526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.860535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 00:38:34.694 [2024-12-07 11:50:33.860842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.860851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.694 qpair failed and we were unable to recover it. 
00:38:34.694 [2024-12-07 11:50:33.861168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.694 [2024-12-07 11:50:33.861179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.861500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.861800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.861810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.862167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.862177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.862511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.862520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.862843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.862852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.863165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.863174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.863375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.863384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.863545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.863555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.863840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.863849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.864156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.864166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.864341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.864351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.864746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.864755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.865063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.865072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.865408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.865418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.865731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.865741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.866073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.866082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.866274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.866283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.866565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.866575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.866886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.866895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.867090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.867099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.867441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.867450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.867759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.867768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.867851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.867860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.868175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.868184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.868370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.868380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.868692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.868702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.869027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.869037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.869330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.869340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.869640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.869649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 
00:38:34.695 [2024-12-07 11:50:33.869832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.869842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.870208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.870219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.870571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.870581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.695 qpair failed and we were unable to recover it. 00:38:34.695 [2024-12-07 11:50:33.870743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.695 [2024-12-07 11:50:33.870755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.871057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.871067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.871376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.871385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.871575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.871585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.871884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.871893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.872093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.872103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.872299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.872308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.872581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.872590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.872799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.872809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.873119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.873129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.873519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.873528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.873807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.873821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.874151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.874160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.874463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.874472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.874811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.874820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.875135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.875145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.875467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.875476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.875823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.875833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.876146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.876155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.876352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.876362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.876535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.876544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.876745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.876754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.877161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.877171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.877441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.877450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.877742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.877752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.878061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.878072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.878247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.878257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.878541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.878551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.878617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.878625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.878834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.878844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.879130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.879140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.879441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.879450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 
00:38:34.696 [2024-12-07 11:50:33.879765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.879774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.696 [2024-12-07 11:50:33.880103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.696 [2024-12-07 11:50:33.880112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.696 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.880419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.880428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.880748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.880757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.881070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.881080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.881383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.881392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.881705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.881714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.881901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.881910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.882209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.882220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.882511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.882521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.882842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.882852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.883160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.883170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.883465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.883480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.883681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.883691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.883884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.883893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.884003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.884015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.884237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.884246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.884420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.884430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.884708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.884718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.885068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.885078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.885255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.885264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.885500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.885509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.885809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.885818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.886123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.886133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.886330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.886340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.886501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.886511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.886782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.886791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.886972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.886981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.887268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.887278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.887582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.887591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.887889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.887898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.888216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.888227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.888376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.888386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.888707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.888717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.697 [2024-12-07 11:50:33.889052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.889062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 
00:38:34.697 [2024-12-07 11:50:33.889380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.697 [2024-12-07 11:50:33.889390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.697 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.889700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.889710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.890007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.890026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.890358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.890367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.890653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.890663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.890973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.890982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.891238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.891247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.891650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.891660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.891947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.891961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.892137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.892147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.892513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.892522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.892835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.892844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.893107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.893117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.893343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.893599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.698 [2024-12-07 11:50:33.893638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:34.698 [2024-12-07 11:50:33.893650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.698 [2024-12-07 11:50:33.893662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.698 [2024-12-07 11:50:33.893654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.893664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.893673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.698 [2024-12-07 11:50:33.893980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.893990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.894271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.894281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.894601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.894610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.894908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.894919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.895221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.895231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.895534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.895544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.895853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.895863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.895970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:34.698 [2024-12-07 11:50:33.896081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.896091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.896151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:34.698 [2024-12-07 11:50:33.896262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:34.698 [2024-12-07 11:50:33.896285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:34.698 [2024-12-07 11:50:33.896390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.896402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.896726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.896735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.897048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.897058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.897389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.897398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.897689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.897699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.898016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.898026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.898227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.898236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.898440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.898450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 00:38:34.698 [2024-12-07 11:50:33.898754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.698 [2024-12-07 11:50:33.898763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.698 qpair failed and we were unable to recover it. 
00:38:34.698 [2024-12-07 11:50:33.898974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.898984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.899082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.899093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.899378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.899388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.899725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.899734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.899891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.899900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.899987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.899995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.900155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.900165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.900458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.900468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.900769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.900778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.900934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.900943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.901277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.901287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.901563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.901573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.901768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.901780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.901950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.901960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.902337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.902347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.902641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.902651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.902841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.902850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.903177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.903188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.903518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.903527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.903825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.903835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.904157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.904167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.904487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.904496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.904857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.904867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.905154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.905164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.905361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.905370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.905532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.905542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.905623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.905632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.905992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.906001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.906402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.906413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.906725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.906734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 
00:38:34.699 [2024-12-07 11:50:33.906908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.906917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.907131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.699 qpair failed and we were unable to recover it. 00:38:34.699 [2024-12-07 11:50:33.907478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.699 [2024-12-07 11:50:33.907487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.907771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.907781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.908094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.908104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.908420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.908429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.908731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.908740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.908931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.908940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.909276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.909286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.909628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.909638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.909810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.909824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.909991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.910000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.910431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.910440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.910750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.910760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.910963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.910973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.911366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.911376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.911545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.911555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.911751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.911761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.911844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.911853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.912119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.912128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.912309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.912318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.912613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.912623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.912982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.912992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.913058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.913068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.913259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.913283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.913598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.913932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.913941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.914236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.914246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.914538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.914548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.914861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.914870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 
00:38:34.700 [2024-12-07 11:50:33.915180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.915190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.915499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.915508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.915700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.915709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.700 qpair failed and we were unable to recover it. 00:38:34.700 [2024-12-07 11:50:33.915893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.700 [2024-12-07 11:50:33.915903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 00:38:34.701 [2024-12-07 11:50:33.916224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.916234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 
00:38:34.701 [2024-12-07 11:50:33.916534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.916544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 00:38:34.701 [2024-12-07 11:50:33.916708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.916718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 00:38:34.701 [2024-12-07 11:50:33.917037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.917047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 00:38:34.701 [2024-12-07 11:50:33.917256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.917265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 00:38:34.701 [2024-12-07 11:50:33.917426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.701 [2024-12-07 11:50:33.917436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.701 qpair failed and we were unable to recover it. 
00:38:34.701 [2024-12-07 11:50:33.917652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.917661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.917970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.917979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.918167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.918390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.918399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.918640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.918650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.918856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.918865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.919065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.919075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.919270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.919279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.919652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.919661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.919836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.919845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.920057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.920067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.920334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.920344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.920547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.920557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.920928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.920937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.921257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.921267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.921564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.921574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.921761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.921770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.922135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.922145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.922466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.922476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.922849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.922859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.923170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.923180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.923360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.923370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.923635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.923644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.923835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.923844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.924161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.924170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.924507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.924516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.924809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.924818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.925009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.701 [2024-12-07 11:50:33.925021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.701 qpair failed and we were unable to recover it.
00:38:34.701 [2024-12-07 11:50:33.925179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.925191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.925505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.925515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.925678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.925687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.925985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.925994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.926317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.926327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.926522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.926533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.926702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.926715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.927021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.927031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.927301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.927310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.927631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.927640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.927951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.927961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.928276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.928286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.928572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.928581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.928896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.928905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.929240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.929430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.929440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.929750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.929759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.929951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.929961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.930003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.930019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.930338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.930348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.930662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.930671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.930985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.930994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.931302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.931311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.931515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.931524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.931872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.931881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.932196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.932205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.932405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.932415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.932724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.933050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.933060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.933334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.933344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.933550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.933560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.933743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.933752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.934061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.934071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.934342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.934352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.934521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.934530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.702 [2024-12-07 11:50:33.934852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.702 [2024-12-07 11:50:33.934861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.702 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.935188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.935202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.935375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.935384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.935573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.935583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.935747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.935756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.935925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.935938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.936155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.936164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.936388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.936398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.936691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.936700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.937031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.937040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.937377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.937386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.937564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.937573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.937848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.937858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.938205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.938214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.938547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.938557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.938864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.938874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.939081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.939091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.939300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.939309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.939638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.939648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.939812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.939822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.940154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.940164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.940367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.940376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.940547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.940555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.940873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.940883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.941183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.941193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.941478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.941785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.941794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.942120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.942129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.942316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.942326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.942659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.942668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.942981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.942991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.943303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.943312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.943489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.943499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.943693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.703 [2024-12-07 11:50:33.943703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.703 qpair failed and we were unable to recover it.
00:38:34.703 [2024-12-07 11:50:33.943887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.943897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.944254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.944268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.944583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.944592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.944887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.944897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.945188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.945198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.945513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.945523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.945833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.945843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.946168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.946178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.946521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.946530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.946884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.946894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.947084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.947095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.947277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.947289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.947613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.947929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.947938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.948124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.948134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.948319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.948329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.948609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.948619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.948961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.704 [2024-12-07 11:50:33.948971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.704 qpair failed and we were unable to recover it.
00:38:34.704 [2024-12-07 11:50:33.949283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.949294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.949468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.949478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.949819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.949829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.949879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.949888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.950202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.950213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 
00:38:34.704 [2024-12-07 11:50:33.950533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.950542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.950864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.950874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.951177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.951187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.951233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.951242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.951562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 
00:38:34.704 [2024-12-07 11:50:33.951924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.951934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.952082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.952093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.952382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.952392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.952714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.952724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.953039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.953050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 
00:38:34.704 [2024-12-07 11:50:33.953280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.704 [2024-12-07 11:50:33.953290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.704 qpair failed and we were unable to recover it. 00:38:34.704 [2024-12-07 11:50:33.953634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.953645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.953974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.953983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.954276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.954286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.954453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.954463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 
00:38:34.705 [2024-12-07 11:50:33.954770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.954780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.955075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.955085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.955310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.955321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.955544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.955553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.955880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.955890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 
00:38:34.705 [2024-12-07 11:50:33.956195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.956205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.956497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.956508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.956816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.956826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.957027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.957037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.957260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.957270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 
00:38:34.705 [2024-12-07 11:50:33.957543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.957552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.957838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.957848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.958150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.958160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.958333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.958344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.958771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.958781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 
00:38:34.705 [2024-12-07 11:50:33.958949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.958959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.959273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.959283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.959572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.959582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.959873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.959883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.960175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.960185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 
00:38:34.705 [2024-12-07 11:50:33.960483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.960494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.960686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.960695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.961014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.961024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.961344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.961354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.705 qpair failed and we were unable to recover it. 00:38:34.705 [2024-12-07 11:50:33.961633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.705 [2024-12-07 11:50:33.961643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.961949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.961960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.962161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.962173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.962551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.962567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.962874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.962885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.963088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.963098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.963496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.963505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.963801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.963811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.964027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.964037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.964427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.964437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.964628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.964638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.964840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.964850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.965041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.965051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.965360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.965370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.965686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.965695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.965999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.966009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.966191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.966200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.966542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.966551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.966752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.966761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.967157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.967167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.967486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.967496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.967796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.967805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.968121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.968131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.968457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.968467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.968789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.968799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.969181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.969191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.969510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.969519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.969809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.969818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.970047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.970057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.970338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.970351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.970654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.970664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 
00:38:34.706 [2024-12-07 11:50:33.970968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.970977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.971388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.971399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.706 qpair failed and we were unable to recover it. 00:38:34.706 [2024-12-07 11:50:33.971707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.706 [2024-12-07 11:50:33.971717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.707 qpair failed and we were unable to recover it. 00:38:34.707 [2024-12-07 11:50:33.971880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.707 [2024-12-07 11:50:33.971889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.707 qpair failed and we were unable to recover it. 00:38:34.707 [2024-12-07 11:50:33.971939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.707 [2024-12-07 11:50:33.971949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.707 qpair failed and we were unable to recover it. 
00:38:34.710 [2024-12-07 11:50:34.002320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.002330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.002606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.002617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.002923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.002932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.003225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.003237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.003568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.003577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 
00:38:34.710 [2024-12-07 11:50:34.003863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.003874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.004180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.004190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.004497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.004507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.004816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.004826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.005017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.005027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 
00:38:34.710 [2024-12-07 11:50:34.005347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.005356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.005399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.005407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.005754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.005763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.006078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.006088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.006394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.006404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 
00:38:34.710 [2024-12-07 11:50:34.006718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.006727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.710 [2024-12-07 11:50:34.006781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.710 qpair failed and we were unable to recover it. 00:38:34.710 [2024-12-07 11:50:34.007075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.007085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.007258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.007267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.007579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.007589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.007907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.007916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.007962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.007971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.008015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.008025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.008318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.008327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.008639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.008648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.008834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.008843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.009029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.009039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.009339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.009348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.009687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.009696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.009991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.010000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.010297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.010307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.010616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.010625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.010937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.010947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.011200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.011210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.011387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.011404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.011760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.011769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.012069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.012079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.012408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.012417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.012730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.012740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.012927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.012937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.013097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.013107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.013401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.013410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.013681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.013690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.013871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.013882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.014182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.014191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 
00:38:34.711 [2024-12-07 11:50:34.014393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.014402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.014713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.711 [2024-12-07 11:50:34.014722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.711 qpair failed and we were unable to recover it. 00:38:34.711 [2024-12-07 11:50:34.015035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.015049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.015362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.015374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.015660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.015674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 
00:38:34.989 [2024-12-07 11:50:34.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.015837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.016195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.016205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.016416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.016426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.016598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.016607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.016906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.016915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 
00:38:34.989 [2024-12-07 11:50:34.017100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.017111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.017386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.017395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.017576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.017586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.017928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.017938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.018242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.018252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 
00:38:34.989 [2024-12-07 11:50:34.018419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.018429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.018586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.018595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.018863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.018873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.019074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.019084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.989 [2024-12-07 11:50:34.019129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.019138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 
00:38:34.989 [2024-12-07 11:50:34.019423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.989 [2024-12-07 11:50:34.019432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.989 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.019762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.019772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.020053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.020063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.020272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.020281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.020437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.020446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.990 [2024-12-07 11:50:34.020758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.020767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.021038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.021047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.021337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.021347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.021525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.021536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.021716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.021725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.990 [2024-12-07 11:50:34.021956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.021966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.022240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.022249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.022572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.022581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.022900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.022909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.023232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.023242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.990 [2024-12-07 11:50:34.023449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.023458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.023774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.023784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.024117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.024127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.024297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.024308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.024518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.024528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.990 [2024-12-07 11:50:34.024702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.024713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.025020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.025029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.025338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.025347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.025659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.025668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.025977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.025987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.990 [2024-12-07 11:50:34.026295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.026305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.026494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.026503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.026863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.026873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.027166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.027176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 00:38:34.990 [2024-12-07 11:50:34.027547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.990 [2024-12-07 11:50:34.027557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.990 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.027851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.027860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.028170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.028180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.028498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.028508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.028695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.028704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.028995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.029005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.029196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.029205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.029553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.029562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.029889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.029899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.030188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.030198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.030513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.030522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.030833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.030843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.031142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.031152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.031383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.031392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.031576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.031586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.031768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.031777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.032025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:38:34.991 [2024-12-07 11:50:34.032793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.032845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.033356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.033401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.033734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.033752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.034214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.034261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.034496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.034514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.034856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.035291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.035337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.035565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.035582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.035919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.035933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.036050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.036065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 
00:38:34.991 [2024-12-07 11:50:34.036441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.036456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.036783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.036798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.991 [2024-12-07 11:50:34.037090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.991 [2024-12-07 11:50:34.037107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.991 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.037398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.037412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.037728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.037742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.037939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.037953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.038180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.038196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.038402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.038416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.038622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.038636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.038920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.038934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.039119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.039135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.039321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.039336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.039677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.039876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.039889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.040189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.040203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.040394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.040407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.040611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.040626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.041017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.041031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.041215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.041228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.041571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.041585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.041915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.041929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.042117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.042131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.042473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.042487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.042781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.042795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.042997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.043014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.043226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.043240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.043630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.043643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.043815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.043828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.044163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.044177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.044500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.044514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 
00:38:34.992 [2024-12-07 11:50:34.044823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.044836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.992 [2024-12-07 11:50:34.045035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.992 [2024-12-07 11:50:34.045049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.992 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.045414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.045428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.045621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.045635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.045988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.046002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.046351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.046365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.046564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.046578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.046740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.046755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.047058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.047072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.047279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.047292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.047628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.047642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.047980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.047994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.048204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.048218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.048550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.048564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.048853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.048868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.049064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.049078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.049254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.049267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.049586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.049600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.050026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.050040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.050373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.050386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.050727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.050740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.050929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.050942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.051252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.051267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.051593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.051607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.051676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.051691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.051899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.051914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.052099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.052116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.052418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.052432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.052748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.053059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.053073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.053265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.053280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.053566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.053579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.053885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.053898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.054072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.054086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.054487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.054521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 
00:38:34.993 [2024-12-07 11:50:34.054846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.054859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.055237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.993 [2024-12-07 11:50:34.055272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.993 qpair failed and we were unable to recover it. 00:38:34.993 [2024-12-07 11:50:34.055345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.055364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.055533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.055543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.055723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.055733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.056056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.056067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.056232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.056241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.056422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.056432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.056845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.056854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.057031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.057042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.057292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.057302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.057498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.057508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.057819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.057829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.058181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.058191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.058353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.058363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.058689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.058699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.059021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.059032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.059336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.059345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.059661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.059671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.059978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.059987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.060301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.060312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.060615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.060625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.060824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.060835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.061109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.061119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.061300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.061309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.061528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.061537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.061761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.061771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.062111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.062121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.062476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.062485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.062784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.062794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 
00:38:34.994 [2024-12-07 11:50:34.063105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.063115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.063287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.063299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.063581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.063590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.063786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.994 [2024-12-07 11:50:34.063795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.994 qpair failed and we were unable to recover it. 00:38:34.994 [2024-12-07 11:50:34.064166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.064176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.064360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.064369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.064732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.064741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.065111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.065121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.065169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.065178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.065414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.065423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.065750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.065759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.066094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.066104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.066321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.066331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.066568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.066578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.066923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.066933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.067315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.067325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.067521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.067531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.067708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.067718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.068033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.068042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.068362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.068371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.068524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.068533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.068587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.068596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.068724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.068732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.069016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.069026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.069396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.069405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.069737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.069747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.069941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.069951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.070170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.070179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.070467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.070476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.070792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.070802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.070991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.071001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.071260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.071271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.071602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.071612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.071924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.071934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.072127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.072138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.072353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.072366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.072692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.072706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.073045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.073054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.073355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.073364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.073666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.073675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 
00:38:34.995 [2024-12-07 11:50:34.073867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.073877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.074035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.995 [2024-12-07 11:50:34.074046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.995 qpair failed and we were unable to recover it. 00:38:34.995 [2024-12-07 11:50:34.074212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.074222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.074631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.074640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.074888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.074897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 
00:38:34.996 [2024-12-07 11:50:34.075089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.075099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.075278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.075287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.075511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.075520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.075697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.075707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.075898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.075908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 
00:38:34.996 [2024-12-07 11:50:34.076070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.076081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.076391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.076400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.076572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.076581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.076912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.076921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.077086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.077096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 
00:38:34.996 [2024-12-07 11:50:34.077393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.077404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.077709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.077719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.077886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.077895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.078192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.078202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 00:38:34.996 [2024-12-07 11:50:34.078401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.996 [2024-12-07 11:50:34.078410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.996 qpair failed and we were unable to recover it. 
00:38:34.996 [2024-12-07 11:50:34.078729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.078738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.078912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.078921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.079182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.079193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.079419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.079429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.079469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.079478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.079789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.079798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.080081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.080091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.080420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.080429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.080600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.080609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.080894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.080904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.081236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.081245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.081415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.081424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.081763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.081774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.081968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.081978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.082373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.082383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.082739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.082749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.083158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.083167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.083354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.083363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.083720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.083729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.083931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.996 [2024-12-07 11:50:34.083940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.996 qpair failed and we were unable to recover it.
00:38:34.996 [2024-12-07 11:50:34.084100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.084110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.084437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.084448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.084669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.084679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.084949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.084958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.085219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.085229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.085554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.085563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.085906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.085915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.086261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.086271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.086437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.086447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.086754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.086764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.087103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.087113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.087286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.087296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.087489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.087498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.087820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.087829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.088151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.088161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.088478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.088488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.088609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.088618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.088796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.088806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.089226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.089236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.089565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.089574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.089771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.089786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.090066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.090075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.090395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.090405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.090720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.090729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.090901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.090911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.091062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.091072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.091384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.091393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.091558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.091568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.091649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.091841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.091850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.092031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.092043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.092413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.092422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.092623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.092632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.092933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.092942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.093313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.093322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.093696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.093705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.093856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.093866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.094099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.997 [2024-12-07 11:50:34.094109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.997 qpair failed and we were unable to recover it.
00:38:34.997 [2024-12-07 11:50:34.094320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.094329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.094700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.094709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.094905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.094915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.095128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.095140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.095467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.095476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.095859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.095869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.096095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.096105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.096306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.096315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.096367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.096376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.096583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.096593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.096925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.096934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.097128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.097138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.097376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.097386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.097725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.097736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.098075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.098086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.098275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.098284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.098460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.098469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.098809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.098818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.099002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.099014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.099317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.099326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.099660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.099670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.099975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.099985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.100333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.100343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.100540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.100550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.100925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.100935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.101143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.101152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.101470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.101480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.101662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.101671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.101978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.101988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.102375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.102385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.102551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.102561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.102770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.102780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.103109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.103119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.103439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.103449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.103624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.103633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.103930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.103939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.104256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.998 [2024-12-07 11:50:34.104265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:34.998 qpair failed and we were unable to recover it.
00:38:34.998 [2024-12-07 11:50:34.104454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.998 [2024-12-07 11:50:34.104464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.998 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.104681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.104690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.104873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.104883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.105020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.105029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.105085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.105095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.105381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.105391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.105554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.105567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.105913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.105923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.106225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.106235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.106544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.106557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.106859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.106869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.106952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.106961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.107052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.107062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.107316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.107325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.107646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.107655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.107973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.107983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.108294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.108303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.108603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.108613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.108666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.108675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.108861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.108871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.109080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.109090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.109140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.109151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.109318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.109328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.109622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.109631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.109788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.109798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.110053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.110396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.110407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.110577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.110587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.110784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.110793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.111027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.111037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.111420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.111429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.111586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.111595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.111877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.111887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.112114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.112124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 00:38:34.999 [2024-12-07 11:50:34.112293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.999 [2024-12-07 11:50:34.112302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:34.999 qpair failed and we were unable to recover it. 
00:38:34.999 [2024-12-07 11:50:34.112622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.112631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.112984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.112993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.113210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.113220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.113612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.113622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.113815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.113824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.113878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.113886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.114033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.114043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.114181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.114190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.114364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.114373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.114553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.114563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.114879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.114889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.115190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.115201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.115531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.115540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.115734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.115743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.115962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.115971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.116303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.116314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.116656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.116666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.116929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.116939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.117110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.117121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.117429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.117439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.117625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.117634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.117960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.117970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.118168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.118177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.118366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.118375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.118658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.118668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.118847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.118858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.118913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.118922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.119227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.119237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.119451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.119461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.119674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.119685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.119815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.119825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.120013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.120024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.120209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.120219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.120444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.120454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.120771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.120781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 
00:38:35.000 [2024-12-07 11:50:34.120955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.120965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.121119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.121130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.121432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.000 [2024-12-07 11:50:34.121442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.000 qpair failed and we were unable to recover it. 00:38:35.000 [2024-12-07 11:50:34.121612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.121625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.121827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.121836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 
00:38:35.001 [2024-12-07 11:50:34.122007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.122023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.122312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.122321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.122509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.122518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.122873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.122882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.123079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.123089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 
00:38:35.001 [2024-12-07 11:50:34.123417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.123425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.123715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.123725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.123894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.123903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.124188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.124197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 00:38:35.001 [2024-12-07 11:50:34.124521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.001 [2024-12-07 11:50:34.124530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.001 qpair failed and we were unable to recover it. 
00:38:35.001 [2024-12-07 11:50:34.124720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.001 [2024-12-07 11:50:34.124729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.001 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x6150003aff00 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously with advancing timestamps from 11:50:34.124924 through 11:50:34.155408; only the timestamps differ ...]
00:38:35.004 [2024-12-07 11:50:34.155666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.155676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.156006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.156021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.156337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.156346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.156695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.156705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.157013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.157024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 
00:38:35.004 [2024-12-07 11:50:34.157307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.157316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.157555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.157564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.157905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.157915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.158242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.158252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.158617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.158627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 
00:38:35.004 [2024-12-07 11:50:34.158797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.158806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.159024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.159034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.159349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.159359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.159545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.159555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.159873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.159882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 
00:38:35.004 [2024-12-07 11:50:34.160061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.160071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.160355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.160364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.160563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.160573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.160740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.160750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.160920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.160930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 
00:38:35.004 [2024-12-07 11:50:34.161096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.004 [2024-12-07 11:50:34.161107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.004 qpair failed and we were unable to recover it. 00:38:35.004 [2024-12-07 11:50:34.161421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.161430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.161823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.161833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.162018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.162028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.162405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.162415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.162730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.162739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.162929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.162938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.163270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.163280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.163436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.163446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.163766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.163775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.163958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.163968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.164130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.164141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.164435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.164445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.164754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.164765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.164927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.164936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.165121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.165131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.165341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.165350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.165539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.165548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.165732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.165742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.165907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.165917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.165991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.166000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.166317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.166327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.166652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.166662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.166967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.166977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.167289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.167300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.167460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.167470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.167654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.167665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.167965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.167975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.168166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.168177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.168222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.168231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.168412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.168423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.168607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.168617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.168940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.168950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.169265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.169276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.169551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.169560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 
00:38:35.005 [2024-12-07 11:50:34.169810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.169821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.170210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.170220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.170508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.170518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.170558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.005 [2024-12-07 11:50:34.170567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.005 qpair failed and we were unable to recover it. 00:38:35.005 [2024-12-07 11:50:34.170880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.170890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.171223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.171234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.171419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.171429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.171825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.171834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.172124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.172133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.172338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.172348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.172671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.172954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.172970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.173302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.173312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.173473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.173482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.173772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.173781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.174083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.174093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.174459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.174468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.174634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.174643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.174959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.174971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.175162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.175173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.175452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.175461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.175785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.175797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.176089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.176099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.176430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.176439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.176778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.176787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.176951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.177276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.177286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.177580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.177590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.177893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.177903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.178121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.178131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.178448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.178458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.178625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.178634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.178823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.178833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.179180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.179190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.179360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.179721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.179730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.180027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.180036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.180228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.180239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.180395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.180404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.180566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.180575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 
00:38:35.006 [2024-12-07 11:50:34.180754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.180763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.181139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.006 [2024-12-07 11:50:34.181149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.006 qpair failed and we were unable to recover it. 00:38:35.006 [2024-12-07 11:50:34.181335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.181344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.181629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.181638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.181953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.181962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.182288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.182298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.182491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.182502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.182782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.182936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.182946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.183259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.183269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.183589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.183598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.183966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.183976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.184176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.184186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.184531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.184541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.184720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.184729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.184915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.184924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.185153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.185162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.185324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.185333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.185696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.185707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.186023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.186032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.186251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.186260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.186462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.186471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.186781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.186791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.186950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.186960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.187154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.187163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.187413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.187422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.187577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.187587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.187907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.187917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.188222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.188232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.188544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.188553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.188741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.188751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.189065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.189074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.189381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.189391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.189707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.189716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.190002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.190026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.190215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.190225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.190550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.190559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.190895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.190904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.191225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.191235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 00:38:35.007 [2024-12-07 11:50:34.191586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.007 [2024-12-07 11:50:34.191595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.007 qpair failed and we were unable to recover it. 
00:38:35.007 [2024-12-07 11:50:34.191919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.191928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.192084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.192093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.192255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.192265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.192594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.192603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.192908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.192918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.193253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.193264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.193614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.193624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.193928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.194211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.194221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.194396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.194406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.194660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.194669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.194982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.194991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.195168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.195177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.195469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.195478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.195741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.195751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.196053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.196063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.196276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.196285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.196469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.196479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.196774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.196785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.196971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.196981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.197160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.197170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.197463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.197473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.197827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.197837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.198169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.198179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.198497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.198507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.198694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.198704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.199013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.199023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.199416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.199425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.199651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.199661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.199996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.200005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.008 [2024-12-07 11:50:34.200329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.200339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.200495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.200505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.200664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.200673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.200934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.200943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 00:38:35.008 [2024-12-07 11:50:34.201121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.008 [2024-12-07 11:50:34.201131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.008 qpair failed and we were unable to recover it. 
00:38:35.009 [2024-12-07 11:50:34.201324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.201334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.201508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.201517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.201811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.201821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.202139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.202149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.202455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.202465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.202766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.202775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.202944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.202953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.203266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.203275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.203497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.203506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.203685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.203695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.204028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.204038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.204214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.204223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.204437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.204447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.204493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.204502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.204799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.204808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.205156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.205165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.205501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.205510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.205703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.205885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.206168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.206177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.206555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.206564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.206868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.206877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.207075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.207084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.207385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.207400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.207720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.207729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.208025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.208034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.208346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.208355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.208541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.208551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.208861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.208871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.209204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.209214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.209537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.209547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.209852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.209862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.210167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.210177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.210520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.210529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.210842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.210854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.211050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.211060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.211364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.211373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.009 [2024-12-07 11:50:34.211566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.009 [2024-12-07 11:50:34.211575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.009 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.211923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.211932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.212147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.212157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.212332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.212341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.212697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.212707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.213017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.213026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.213331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.213340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.213650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.213660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.213970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.213980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.214297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.214307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.214474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.214484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.214825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.214835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.215147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.215156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.215476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.215486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.215798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.215807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.215854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.215863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.216124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.216134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.216318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.216329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.216649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.216659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.216950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.216966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.217302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.217313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.217619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.217628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.217948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.218259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.218269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.218450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.218460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.218780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.218790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.219092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.219104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.219281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.219291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.219523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.219533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.219859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.219868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.220175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.220185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.220510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.220519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.220835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.220844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.221051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.221061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.221349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.221358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.221658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.221667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.221996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.222005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.222173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.222183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.222548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.010 [2024-12-07 11:50:34.222558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.010 qpair failed and we were unable to recover it.
00:38:35.010 [2024-12-07 11:50:34.222889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.222899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.223220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.223230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.223521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.223531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.223710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.223719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.224046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.224056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.224372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.224382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.224585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.224594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.224819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.224835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.225153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.225495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.225505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.225698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.225707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.225867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.225880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.226179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.226190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.226410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.226419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.226794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.226803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.226986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.226996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.227371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.227381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.227692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.227702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.228014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.228025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.228110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.228121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.228418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.228428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.228717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.228727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.228919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.228930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.229135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.229144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.229463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.229473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.229693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.229702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.229893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.229902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.230227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.230238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.230571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.230581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.230889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.230898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.231194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.231204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.231524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.231533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.231721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.231730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.232082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.232092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.232497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.232507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.232836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.232845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.233178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.233188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.011 qpair failed and we were unable to recover it.
00:38:35.011 [2024-12-07 11:50:34.233499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.011 [2024-12-07 11:50:34.233508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.012 qpair failed and we were unable to recover it.
00:38:35.012 [2024-12-07 11:50:34.233834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.012 [2024-12-07 11:50:34.233844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.012 qpair failed and we were unable to recover it.
00:38:35.012 [2024-12-07 11:50:34.234163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.234173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.234577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.234586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.234900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.234910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.235099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.235109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.235476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.235486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.235776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.235787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.236083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.236093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.236281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.236290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.236532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.236542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.236908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.236918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.237079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.237089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.237384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.237394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.237742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.237751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.238063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.238080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.238399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.238408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.238702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.238884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.238894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.239225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.239235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.239403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.239413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.239582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.239591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.239906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.239915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.240288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.240297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.240612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.240622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.240796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.240806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.241117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.241127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.241298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.241307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.241679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.241688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.241864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.241874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.241915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.241925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.242240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.242250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.242563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.242573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.242866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.242876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.243268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.243278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.243562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.243573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 00:38:35.012 [2024-12-07 11:50:34.243737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.012 [2024-12-07 11:50:34.243747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.012 qpair failed and we were unable to recover it. 
00:38:35.012 [2024-12-07 11:50:34.244068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.244078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.244382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.244395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.244615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.244625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.244816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.244825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.244872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.244880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.245212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.245222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.245531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.245540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.245870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.245880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.246097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.246107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.246448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.246457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.246767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.246776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.247126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.247136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.247336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.247345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.247553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.247563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.247690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.247701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.248026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.248036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.248320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.248330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.248665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.248674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.248847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.248856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.249144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.249153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.249464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.249473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.249744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.249754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.249955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.249964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.250245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.250254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.250551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.250561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.250872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.251097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.251106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.251424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.251433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.251739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.251748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.252043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.252052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.252238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.252248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.252607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.252618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.253018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.253028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.253316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.253327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.013 [2024-12-07 11:50:34.253553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.253563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 
00:38:35.013 [2024-12-07 11:50:34.253758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.013 [2024-12-07 11:50:34.253767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.013 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.253951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.253961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.254346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.254356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.254529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.254539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.254752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.254761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 
00:38:35.014 [2024-12-07 11:50:34.255060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.255070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.255350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.255359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.255512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.255521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.255844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.255854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 00:38:35.014 [2024-12-07 11:50:34.256183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.014 [2024-12-07 11:50:34.256193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.014 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.286215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.286225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.286537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.286546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.286844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.286853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.287018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.287028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.287302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.287311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.287724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.287735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.288031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.288040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.288239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.288248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.288524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.288534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.288845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.288854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.289182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.289192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.289377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.289387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.289670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.289679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.290019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.290030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.290232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.290241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.290560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.290570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.290904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.290915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.291204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.291215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.291541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.291551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.291897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.291907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.292271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.292280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.292445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.292454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.292683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.292693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.293027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.293038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.293340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.293349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 
00:38:35.017 [2024-12-07 11:50:34.293642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.293654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.293992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.294001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.294398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.294407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.017 [2024-12-07 11:50:34.294740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.017 [2024-12-07 11:50:34.294751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.017 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.295123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.295132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.295432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.295442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.295627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.295637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.295921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.295930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.296116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.296125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.296312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.296321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.296655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.296664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.296990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.296999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.297296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.297306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.297629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.297655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.297980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.297991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.298339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.298349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.298397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.298406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.298593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.298602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.298650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.298660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.298965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.298975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.299170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.299179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.299458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.299467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.299515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.299524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.299813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.299822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.300058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.300069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.300391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.300400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.300724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.300734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.301050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.301060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.301142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.301151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.301317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.301327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.301542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.301552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.301872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.301882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.302178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.302188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.302527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.302538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.302852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.302861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.303024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.303033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.303303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.303312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.303674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.303683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.303858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.303868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.304121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.304131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 
00:38:35.018 [2024-12-07 11:50:34.304447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.304458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.018 [2024-12-07 11:50:34.304629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.018 [2024-12-07 11:50:34.304639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.018 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.304967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.304977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.305272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.305281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.305599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.305608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 
00:38:35.019 [2024-12-07 11:50:34.305790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.305800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.306068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.306078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.306373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.306382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.306717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.306727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.306942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.306952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 
00:38:35.019 [2024-12-07 11:50:34.307123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.307133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.307355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.307365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.307683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.307693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.307981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.307990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 00:38:35.019 [2024-12-07 11:50:34.308297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.019 [2024-12-07 11:50:34.308307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.019 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-12-07 11:50:34.337169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.337179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.337501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.337510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.337820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.337830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.338131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.338141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.338460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.338470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-12-07 11:50:34.338647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.338660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.339016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.339027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.339340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.339350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.339663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.339672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.339864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.339873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-12-07 11:50:34.340068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.340078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.340290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.340299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.340679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.340690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.340869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.340879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.341052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.341063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 
00:38:35.293 [2024-12-07 11:50:34.341370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.341380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.293 [2024-12-07 11:50:34.341584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.293 [2024-12-07 11:50:34.341593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.293 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.341814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.341824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.342140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.342150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.342205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.342215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.342520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.342530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.342718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.342729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.343033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.343044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.343317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.343327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.343655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.343664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.343859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.343869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.344223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.344234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.344427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.344437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.344832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.344842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.345100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.345111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.345299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.345309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.345696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.345706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.345893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.345903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.346257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.346267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.346434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.346443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.346645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.346657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.346864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.346880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.347215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.347225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.347412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.347421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.347650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.347661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.347994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.348007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.348321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.348331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.348709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.348718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.349019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.349029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.349221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.349231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 
00:38:35.294 [2024-12-07 11:50:34.349512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.294 [2024-12-07 11:50:34.349521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.294 qpair failed and we were unable to recover it. 00:38:35.294 [2024-12-07 11:50:34.349705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.349715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.350073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.350083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.350402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.350413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.350597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.350606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 
00:38:35.295 [2024-12-07 11:50:34.350930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.350939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.351243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.351253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.351569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.351580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.351824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.351834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.352116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.352126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 
00:38:35.295 [2024-12-07 11:50:34.352473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.352482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.352793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.352802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.352848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.352858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.353155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.353166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.353484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.353493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 
00:38:35.295 [2024-12-07 11:50:34.353761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.353770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.354087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.354097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.354258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.354268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.354633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.354643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.354785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.354796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 
00:38:35.295 [2024-12-07 11:50:34.355087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.355097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.355433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.355443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.355815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.355826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.355994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.356003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.356346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.356356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 
00:38:35.295 [2024-12-07 11:50:34.356581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.356591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.356921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.356930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.357147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.357157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.357492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.295 [2024-12-07 11:50:34.357502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.295 qpair failed and we were unable to recover it. 00:38:35.295 [2024-12-07 11:50:34.357554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.296 [2024-12-07 11:50:34.357563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.296 qpair failed and we were unable to recover it. 
00:38:35.296 [2024-12-07 11:50:34.357858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.296 [2024-12-07 11:50:34.357869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.296 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / qpair recovery failure sequence repeated for tqpair=0x6150003aff00, addr=10.0.0.2, port=4420 from 11:50:34.358156 through 11:50:34.386209 ...]
00:38:35.300 [2024-12-07 11:50:34.386587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.300 [2024-12-07 11:50:34.386596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420
00:38:35.300 qpair failed and we were unable to recover it.
00:38:35.300 [2024-12-07 11:50:34.386754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.386763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.386954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.386963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.387223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.387233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.387420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.387429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.387602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.387611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 
00:38:35.300 [2024-12-07 11:50:34.387772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.387784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.387951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.387961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.388009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.388022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.388231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.388240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.388465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.388474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 
00:38:35.300 [2024-12-07 11:50:34.388773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.388784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.389076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.389086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.389386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.389576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.389586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.389790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.389799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 
00:38:35.300 [2024-12-07 11:50:34.389842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.389850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.390218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.390228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.390543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.390552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.390860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.390869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.391182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.391192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 
00:38:35.300 [2024-12-07 11:50:34.391465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.391475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.391528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.391538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.391577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.391586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.391875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.300 [2024-12-07 11:50:34.391884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.300 qpair failed and we were unable to recover it. 00:38:35.300 [2024-12-07 11:50:34.392202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.392211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.392387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.392396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.392700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.392709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.392901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.392910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.393214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.393554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.393564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.393610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.393619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.393945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.393954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.394203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.394213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.394260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.394269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.394553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.394563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.394897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.394906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.395099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.395110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.395287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.395296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.395470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.395479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.395697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.395706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.395899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.395908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.396206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.396215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.396545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.396558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.396904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.396914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.397074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.397085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.397305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.397316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.397530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.397539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.397859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.397869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.397916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.397926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.398088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.398098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 
00:38:35.301 [2024-12-07 11:50:34.398307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.398317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.398364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.398373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.398427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.398436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.398492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.301 [2024-12-07 11:50:34.398501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.301 qpair failed and we were unable to recover it. 00:38:35.301 [2024-12-07 11:50:34.398830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.398840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 
00:38:35.302 [2024-12-07 11:50:34.399220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.399230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 00:38:35.302 [2024-12-07 11:50:34.399521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.399531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 00:38:35.302 [2024-12-07 11:50:34.399750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.399759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 00:38:35.302 [2024-12-07 11:50:34.399949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.399958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 00:38:35.302 [2024-12-07 11:50:34.400328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.302 [2024-12-07 11:50:34.400338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.302 qpair failed and we were unable to recover it. 
00:38:35.302 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:35.302 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:38:35.302 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:35.302 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:35.302 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.303 [2024-12-07 11:50:34.409698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.409708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.409876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.409885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.410266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.410276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.410508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.410518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.410845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.410855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 
00:38:35.303 [2024-12-07 11:50:34.411162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.411172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.411515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.411683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.411692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.411865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.411874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.412121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.412131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 
00:38:35.303 [2024-12-07 11:50:34.412296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.412306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.303 [2024-12-07 11:50:34.412634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.303 [2024-12-07 11:50:34.412643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.303 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.412942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.412952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.413269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.413282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.413563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.413577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.413845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.413855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.414048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.414058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.414254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.414263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.414634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.414644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.415018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.415028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.415453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.415463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.415797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.415807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.416138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.416150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.416459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.416469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.416630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.416639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.416990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.417001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.417311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.417321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.417495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.417505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.417705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.417722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.417804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.417813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.418091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.418104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.418497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.418507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.418683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.418692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.419078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.419088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.419412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.419422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.419748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.419757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.420043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.420053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.420367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.420376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.420695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.420704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.421015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.421025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 
00:38:35.304 [2024-12-07 11:50:34.421323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.304 [2024-12-07 11:50:34.421333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.304 qpair failed and we were unable to recover it. 00:38:35.304 [2024-12-07 11:50:34.421675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.421685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.421855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.421864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.422142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.422153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.422462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.422472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 
00:38:35.305 [2024-12-07 11:50:34.422737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.422748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.423052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.423062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.423399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.423408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.423719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.423729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.423930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.423940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 
00:38:35.305 [2024-12-07 11:50:34.424251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.424262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.424429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.424439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.424712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.424721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.425017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.425027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.425359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.425368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 
00:38:35.305 [2024-12-07 11:50:34.425659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.425668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.425861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.425871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.426187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.426197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.426498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.426509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 00:38:35.305 [2024-12-07 11:50:34.426703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.305 [2024-12-07 11:50:34.426715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003aff00 with addr=10.0.0.2, port=4420 00:38:35.305 qpair failed and we were unable to recover it. 
00:38:35.305 [2024-12-07 11:50:34.427095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.305 [2024-12-07 11:50:34.427146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.305 qpair failed and we were unable to recover it.
00:38:35.306 [2024-12-07 11:50:34.437245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.306 [2024-12-07 11:50:34.437260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.306 qpair failed and we were unable to recover it. 00:38:35.306 [2024-12-07 11:50:34.437472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.306 [2024-12-07 11:50:34.437487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.306 qpair failed and we were unable to recover it. 00:38:35.306 [2024-12-07 11:50:34.437675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.306 [2024-12-07 11:50:34.437690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.306 qpair failed and we were unable to recover it. 00:38:35.306 [2024-12-07 11:50:34.438034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.306 [2024-12-07 11:50:34.438049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.306 qpair failed and we were unable to recover it. 00:38:35.306 [2024-12-07 11:50:34.438336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.306 [2024-12-07 11:50:34.438349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.306 qpair failed and we were unable to recover it. 
00:38:35.306 [2024-12-07 11:50:34.438521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.438535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.438873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.438887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.439224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.439238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.439435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.439450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.439776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.439791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 
00:38:35.307 [2024-12-07 11:50:34.439970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.439983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.440390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.440405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.440616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.440629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.440970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.440985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 00:38:35.307 [2024-12-07 11:50:34.441187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.307 [2024-12-07 11:50:34.441201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.307 qpair failed and we were unable to recover it. 
00:38:35.307 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:35.307 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:35.307 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.307 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.307 (connect()/qpair error repeated between the commands above, 11:50:34.441538 through 11:50:34.443733)
00:38:35.310 [2024-12-07 11:50:34.463454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.463468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.463651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.463666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.463985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.463999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.464330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.464345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.464540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.464554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 
00:38:35.310 [2024-12-07 11:50:34.464735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.464749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.465069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.465084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.465297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.465311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.465652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.465666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.465912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.465926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 
00:38:35.310 [2024-12-07 11:50:34.466132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.466147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.466511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.466526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.466722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.466737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.466910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.466924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.467226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.467241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 
00:38:35.310 [2024-12-07 11:50:34.467435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.467450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.467629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.467642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.467872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.467885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.468220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 00:38:35.310 [2024-12-07 11:50:34.468571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.468586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.310 qpair failed and we were unable to recover it. 
00:38:35.310 [2024-12-07 11:50:34.468772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.310 [2024-12-07 11:50:34.468789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.469114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.469129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.469528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.469542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.469730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.469743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.470090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.470105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.470312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.470326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.470514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.470529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.470832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.470848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.471171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.471186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.471395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.471410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.471717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.471730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.471796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.471809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.472106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.472121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.472299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.472312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.472540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.472555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.472912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.472927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.473306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.473322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.473631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.473645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.473966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.473979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.474306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.474320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.474485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.474499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.474694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.474707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.474885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.474899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.475129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.475144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.475493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.475508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.475841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.475855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.476158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.476173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.476372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.476386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.476748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.476763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 00:38:35.311 [2024-12-07 11:50:34.477088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.311 [2024-12-07 11:50:34.477102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.311 qpair failed and we were unable to recover it. 
00:38:35.311 [2024-12-07 11:50:34.477356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.477370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.477754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.477767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.478070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.478085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.478280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.478293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.478587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.478601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 
00:38:35.312 [2024-12-07 11:50:34.478670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.478683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.478964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.478978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.479196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.479210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.479492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.479505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.479895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.479909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 
00:38:35.312 [2024-12-07 11:50:34.480194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.480209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.480538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.480551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.480722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.480735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.480793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.480806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.480877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.480890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 
00:38:35.312 [2024-12-07 11:50:34.481215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.481229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.481567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.481580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.481907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.481923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.482287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.482303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.482651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 
00:38:35.312 [2024-12-07 11:50:34.482978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.482992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.483322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.483336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.483653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.483667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.484002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.484020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.484366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.484380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 
00:38:35.312 [2024-12-07 11:50:34.484647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.484661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.484976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.484991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.485311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.485325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.485505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.312 [2024-12-07 11:50:34.485519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.312 qpair failed and we were unable to recover it. 00:38:35.312 [2024-12-07 11:50:34.485865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.313 [2024-12-07 11:50:34.485879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.313 qpair failed and we were unable to recover it. 
00:38:35.316 [2024-12-07 11:50:34.515839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.316 [2024-12-07 11:50:34.515853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.316 qpair failed and we were unable to recover it. 00:38:35.316 [2024-12-07 11:50:34.516024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.316 [2024-12-07 11:50:34.516038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.316 qpair failed and we were unable to recover it. 00:38:35.316 [2024-12-07 11:50:34.516278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.316 [2024-12-07 11:50:34.516293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.316 qpair failed and we were unable to recover it. 00:38:35.316 Malloc0 00:38:35.316 [2024-12-07 11:50:34.516500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.316 [2024-12-07 11:50:34.516529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.316 qpair failed and we were unable to recover it. 00:38:35.316 [2024-12-07 11:50:34.516614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.316 [2024-12-07 11:50:34.516635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.316 qpair failed and we were unable to recover it. 
00:38:35.316 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.317 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:35.317 [2024-12-07 11:50:34.516923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.516948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.317 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:35.317 [2024-12-07 11:50:34.517274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.517301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.517483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.517499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.517804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.517819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 
00:38:35.317 [2024-12-07 11:50:34.518139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.518153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.518473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.518487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.518840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.518854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.519174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.519188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.519372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.519386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 
00:38:35.317 [2024-12-07 11:50:34.519769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.519783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.520001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.317 [2024-12-07 11:50:34.520096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.520112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.520298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.520311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.520512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.520528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.520867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.520882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 
00:38:35.317 [2024-12-07 11:50:34.521084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.521099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.521154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.521167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.521491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.521505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.521827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.521840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.522033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.522047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 
00:38:35.317 [2024-12-07 11:50:34.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.522248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.522533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.522546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.522749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.522763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.523108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.523123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.523440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.523454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 
00:38:35.317 [2024-12-07 11:50:34.523648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.523662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.523956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.523969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.524185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.317 [2024-12-07 11:50:34.524199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.317 qpair failed and we were unable to recover it. 00:38:35.317 [2024-12-07 11:50:34.524424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.524438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.524757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.524772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 [2024-12-07 11:50:34.525051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.525065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.525371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.525386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.525668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.525682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.525888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.525901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.526119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.526133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 [2024-12-07 11:50:34.526476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.526490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.526773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.526790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.526998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.527016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.527343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.527356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.527577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.527592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 [2024-12-07 11:50:34.527913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.527927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.528230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.528244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.528416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.528430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.528794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.318 [2024-12-07 11:50:34.528824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:35.318 [2024-12-07 11:50:34.529050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.318 [2024-12-07 11:50:34.529076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:35.318 [2024-12-07 11:50:34.529296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.529315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.529619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.529634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.529842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.529856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 [2024-12-07 11:50:34.530047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.530069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.530356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.530370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.530573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.530586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.530884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.530898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.531175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.531189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 
00:38:35.318 [2024-12-07 11:50:34.531370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.531383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.318 [2024-12-07 11:50:34.531715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.318 [2024-12-07 11:50:34.531729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.318 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.531931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.531945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.532116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.532131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.532451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.532465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 [2024-12-07 11:50:34.532784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.532798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.532867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.532881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.533175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.533189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.533516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.533530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.533721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.533735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 [2024-12-07 11:50:34.534066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.534081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.534283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.534298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.534606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.534619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.534945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.534958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.535290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.535304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 [2024-12-07 11:50:34.535642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.535657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.535834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.535848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.536042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.536056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.536397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.536412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.319 [2024-12-07 11:50:34.536853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:35.319 [2024-12-07 11:50:34.536882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.319 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:35.319 [2024-12-07 11:50:34.537266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.537289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.537616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.537631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.537849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.537864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 [2024-12-07 11:50:34.538180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.538194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.538489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.538502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.538685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.538699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.538918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.538932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 00:38:35.319 [2024-12-07 11:50:34.539192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.319 [2024-12-07 11:50:34.539207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420 00:38:35.319 qpair failed and we were unable to recover it. 
00:38:35.319 [2024-12-07 11:50:34.539573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.319 [2024-12-07 11:50:34.539585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.319 qpair failed and we were unable to recover it.
00:38:35.319 [2024-12-07 11:50:34.539773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.319 [2024-12-07 11:50:34.539786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.319 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.539993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.540007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.540257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.540271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.540544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.540558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.540771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.540785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.540994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.541008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.541366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.541379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.541568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.541582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.541798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.542034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.542049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.542326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.542339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.542731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.542745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.543060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.543074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.543373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.543387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.543679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.543692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.544007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.544025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.544327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.544341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.544656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.544684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.320 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:35.320 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.320 [2024-12-07 11:50:34.545043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.545070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.320 [2024-12-07 11:50:34.545456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.545477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.545671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.545684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.546037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.546053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.546241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.546257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.546608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.546622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.546938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.546951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.547138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.547153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.547484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.547499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.320 [2024-12-07 11:50:34.547674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.320 [2024-12-07 11:50:34.547688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.320 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.547953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.321 [2024-12-07 11:50:34.547968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.548288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.321 [2024-12-07 11:50:34.548303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.548619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.321 [2024-12-07 11:50:34.548634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039ec00 with addr=10.0.0.2, port=4420
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.548680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:35.321 [2024-12-07 11:50:34.551417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.551526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.551554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.551570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.551581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.551611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.321 11:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2790015
00:38:35.321 [2024-12-07 11:50:34.561271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.561359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.561383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.561396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.561406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.561432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.571257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.571340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.571365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.571378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.571391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.571416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.581296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.581376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.581399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.581410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.581419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.581442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.591342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.591424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.591446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.591457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.591466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.591489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.601221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.601296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.601318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.601329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.601338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.321 [2024-12-07 11:50:34.601360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.321 qpair failed and we were unable to recover it.
00:38:35.321 [2024-12-07 11:50:34.611345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.321 [2024-12-07 11:50:34.611470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.321 [2024-12-07 11:50:34.611491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.321 [2024-12-07 11:50:34.611502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.321 [2024-12-07 11:50:34.611511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.322 [2024-12-07 11:50:34.611550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.322 qpair failed and we were unable to recover it.
00:38:35.322 [2024-12-07 11:50:34.621365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.322 [2024-12-07 11:50:34.621446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.322 [2024-12-07 11:50:34.621467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.322 [2024-12-07 11:50:34.621478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.322 [2024-12-07 11:50:34.621487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.322 [2024-12-07 11:50:34.621509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.322 qpair failed and we were unable to recover it.
00:38:35.322 [2024-12-07 11:50:34.631373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.322 [2024-12-07 11:50:34.631445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.322 [2024-12-07 11:50:34.631466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.322 [2024-12-07 11:50:34.631477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.322 [2024-12-07 11:50:34.631486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.322 [2024-12-07 11:50:34.631508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.322 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.641480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.641560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.641581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.641592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.641601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.641622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.651456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.651532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.651553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.651564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.651573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.651595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.661446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.661522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.661551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.661562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.661571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.661593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.671506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.671582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.671603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.671614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.671623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.671645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.681504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.681582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.681603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.681614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.681622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.681644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.691609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.585 [2024-12-07 11:50:34.691701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.585 [2024-12-07 11:50:34.691722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.585 [2024-12-07 11:50:34.691733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.585 [2024-12-07 11:50:34.691742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.585 [2024-12-07 11:50:34.691764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.585 qpair failed and we were unable to recover it.
00:38:35.585 [2024-12-07 11:50:34.701574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.586 [2024-12-07 11:50:34.701662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.586 [2024-12-07 11:50:34.701693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.586 [2024-12-07 11:50:34.701712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.586 [2024-12-07 11:50:34.701722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.586 [2024-12-07 11:50:34.701750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.586 qpair failed and we were unable to recover it.
00:38:35.586 [2024-12-07 11:50:34.711609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.586 [2024-12-07 11:50:34.711720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.586 [2024-12-07 11:50:34.711751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.586 [2024-12-07 11:50:34.711765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.586 [2024-12-07 11:50:34.711775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.586 [2024-12-07 11:50:34.711804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.586 qpair failed and we were unable to recover it.
00:38:35.586 [2024-12-07 11:50:34.721656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.586 [2024-12-07 11:50:34.721738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.586 [2024-12-07 11:50:34.721763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.586 [2024-12-07 11:50:34.721775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.586 [2024-12-07 11:50:34.721785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.586 [2024-12-07 11:50:34.721809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.586 qpair failed and we were unable to recover it.
00:38:35.586 [2024-12-07 11:50:34.731564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.586 [2024-12-07 11:50:34.731647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.586 [2024-12-07 11:50:34.731669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.586 [2024-12-07 11:50:34.731680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.586 [2024-12-07 11:50:34.731690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.586 [2024-12-07 11:50:34.731712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.586 qpair failed and we were unable to recover it.
00:38:35.586 [2024-12-07 11:50:34.741721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.741813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.741834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.741845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.741855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.741883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.751706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.751786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.751808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.751819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.751828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.751851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.761728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.761809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.761830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.761841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.761850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.761871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.771807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.771880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.771901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.771912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.771921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.771942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.781838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.781925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.781945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.781957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.781965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.781990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.791836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.791989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.792014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.792026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.792035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.792056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.801882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.801957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.801978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.801989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.801999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.802025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.811799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.811902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.811923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.811935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.811943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.811965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.821909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.822003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.586 [2024-12-07 11:50:34.822028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.586 [2024-12-07 11:50:34.822039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.586 [2024-12-07 11:50:34.822048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.586 [2024-12-07 11:50:34.822070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.586 qpair failed and we were unable to recover it. 
00:38:35.586 [2024-12-07 11:50:34.831954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.586 [2024-12-07 11:50:34.832037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.832059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.832073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.832082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.832104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.841896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.841966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.841986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.841997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.842006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.842033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.851946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.852045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.852066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.852077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.852086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.852110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.861995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.862096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.862117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.862129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.862138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.862160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.872081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.872155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.872176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.872187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.872196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.872221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.882171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.882261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.882282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.882294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.882303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.882325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.892096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.892172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.892193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.892204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.892213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.892235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.902065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.902142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.902164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.902175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.902184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.902205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.912132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.912209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.912230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.912241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.912250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.912293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.922161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.922239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.922260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.922271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.922280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.922302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.587 [2024-12-07 11:50:34.932278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.587 [2024-12-07 11:50:34.932355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.587 [2024-12-07 11:50:34.932376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.587 [2024-12-07 11:50:34.932387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.587 [2024-12-07 11:50:34.932396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.587 [2024-12-07 11:50:34.932417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.587 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.942231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.942309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.942330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.942341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.942350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.942372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.952297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.952369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.952390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.952402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.952410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.952436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.962345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.962460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.962484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.962496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.962505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.962526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.972343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.972417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.972438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.972449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.972458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.972480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.982381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.982495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.982517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.982527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.982536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.982557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.856 [2024-12-07 11:50:34.992421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.856 [2024-12-07 11:50:34.992493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.856 [2024-12-07 11:50:34.992514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.856 [2024-12-07 11:50:34.992525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.856 [2024-12-07 11:50:34.992534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.856 [2024-12-07 11:50:34.992555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.856 qpair failed and we were unable to recover it. 
00:38:35.857 [2024-12-07 11:50:35.002438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.857 [2024-12-07 11:50:35.002512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.857 [2024-12-07 11:50:35.002534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.857 [2024-12-07 11:50:35.002545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.857 [2024-12-07 11:50:35.002557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.857 [2024-12-07 11:50:35.002761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.857 qpair failed and we were unable to recover it. 
00:38:35.857 [2024-12-07 11:50:35.012451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.857 [2024-12-07 11:50:35.012538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.857 [2024-12-07 11:50:35.012561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.857 [2024-12-07 11:50:35.012573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.857 [2024-12-07 11:50:35.012583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.857 [2024-12-07 11:50:35.012606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.857 qpair failed and we were unable to recover it. 
00:38:35.857 [2024-12-07 11:50:35.022451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.857 [2024-12-07 11:50:35.022562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.857 [2024-12-07 11:50:35.022583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.857 [2024-12-07 11:50:35.022594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.857 [2024-12-07 11:50:35.022603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:35.857 [2024-12-07 11:50:35.022625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.857 qpair failed and we were unable to recover it. 
00:38:35.857 [2024-12-07 11:50:35.032484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.032557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.032578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.032589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.032598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.032619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.042550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.042670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.042692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.042703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.042712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.042735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.052555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.052624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.052646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.052657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.052666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.052688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.062624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.062744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.062765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.062776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.062786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.062808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.072641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.072721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.072741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.072753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.072762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.072785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.082631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.082705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.082726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.082737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.082746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.082767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.857 qpair failed and we were unable to recover it.
00:38:35.857 [2024-12-07 11:50:35.092652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.857 [2024-12-07 11:50:35.092741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.857 [2024-12-07 11:50:35.092778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.857 [2024-12-07 11:50:35.092792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.857 [2024-12-07 11:50:35.092803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.857 [2024-12-07 11:50:35.092831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.102720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.102810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.102841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.102855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.102866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.102894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.112723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.112808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.112832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.112844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.112853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.112877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.122766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.122839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.122861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.122873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.122882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.122909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.132733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.132805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.132826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.132838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.132852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.132876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.142726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.142808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.142829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.142840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.142849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.142871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.152769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.152849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.152871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.152882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.858 [2024-12-07 11:50:35.152891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.858 [2024-12-07 11:50:35.152913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.858 qpair failed and we were unable to recover it.
00:38:35.858 [2024-12-07 11:50:35.162884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.858 [2024-12-07 11:50:35.162971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.858 [2024-12-07 11:50:35.162992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.858 [2024-12-07 11:50:35.163004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.859 [2024-12-07 11:50:35.163020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.859 [2024-12-07 11:50:35.163043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.859 qpair failed and we were unable to recover it.
00:38:35.859 [2024-12-07 11:50:35.172919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.859 [2024-12-07 11:50:35.172997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.859 [2024-12-07 11:50:35.173023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.859 [2024-12-07 11:50:35.173040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.859 [2024-12-07 11:50:35.173049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.859 [2024-12-07 11:50:35.173071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.859 qpair failed and we were unable to recover it.
00:38:35.859 [2024-12-07 11:50:35.182920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.859 [2024-12-07 11:50:35.182993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.859 [2024-12-07 11:50:35.183019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.859 [2024-12-07 11:50:35.183031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.859 [2024-12-07 11:50:35.183040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.859 [2024-12-07 11:50:35.183062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.859 qpair failed and we were unable to recover it.
00:38:35.859 [2024-12-07 11:50:35.192897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.859 [2024-12-07 11:50:35.192981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.859 [2024-12-07 11:50:35.193002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.859 [2024-12-07 11:50:35.193017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.859 [2024-12-07 11:50:35.193027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.859 [2024-12-07 11:50:35.193049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.859 qpair failed and we were unable to recover it.
00:38:35.859 [2024-12-07 11:50:35.202988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.859 [2024-12-07 11:50:35.203071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.859 [2024-12-07 11:50:35.203092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.859 [2024-12-07 11:50:35.203104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.859 [2024-12-07 11:50:35.203113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:35.859 [2024-12-07 11:50:35.203134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:35.859 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.212992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.213074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.213096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.213107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.213116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.213138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.223061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.223143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.223166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.223179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.223187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.223210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.233056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.233177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.233198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.233210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.233219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.233241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.243095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.243181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.243202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.243213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.243222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.243244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.253106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.253187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.253208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.253220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.253228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.253250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.263128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.263210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.263231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.263246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.263255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.123 [2024-12-07 11:50:35.263277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.123 qpair failed and we were unable to recover it.
00:38:36.123 [2024-12-07 11:50:35.273224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.123 [2024-12-07 11:50:35.273301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.123 [2024-12-07 11:50:35.273321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.123 [2024-12-07 11:50:35.273333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.123 [2024-12-07 11:50:35.273342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.273364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.283193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.283272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.283293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.283304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.283314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.283335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.293191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.293268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.293289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.293300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.293309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.293334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.303182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.303306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.303327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.303339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.303347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.303373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.313339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.313412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.313433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.313445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.313454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.313475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.323262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.323334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.323356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.323367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.323376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.323397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.333306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.333377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.333398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.333409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.333418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.333440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.343369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.343447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.343467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.343479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.343488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.343509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.353398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.353477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.353499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.353510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.353519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.353541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.363471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.363554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.363575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.363586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.363595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.363616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.373466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.124 [2024-12-07 11:50:35.373542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.124 [2024-12-07 11:50:35.373563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.124 [2024-12-07 11:50:35.373574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.124 [2024-12-07 11:50:35.373583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:36.124 [2024-12-07 11:50:35.373604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:36.124 qpair failed and we were unable to recover it.
00:38:36.124 [2024-12-07 11:50:35.383482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.383565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.383586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.383597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.383606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.383628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.393520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.393596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.393616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.393630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.393639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.393661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.403507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.403587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.403608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.403619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.403628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.403650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.413550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.413628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.413650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.413661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.413670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.413691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.423590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.423671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.423692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.423703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.423712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.423733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.433807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.433930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.433962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.433976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.433986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.434026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.443606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.443682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.443706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.443717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.443727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.443750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.453597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.453678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.453700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.453712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.453721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.453745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.124 [2024-12-07 11:50:35.463621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.124 [2024-12-07 11:50:35.463719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.124 [2024-12-07 11:50:35.463741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.124 [2024-12-07 11:50:35.463752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.124 [2024-12-07 11:50:35.463761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.124 [2024-12-07 11:50:35.463786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.124 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.473722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.473812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.473844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.473858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.473869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.473897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.483954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.484038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.484062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.484074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.484084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.484108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.493712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.493781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.493803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.493814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.493823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.493845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.503786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.503873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.503894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.503906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.503915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.503938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.513875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.513993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.514019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.514031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.514040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.514062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.523893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.523968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.523992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.524004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.524018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.524040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.533886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.533965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.533985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.533996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.534005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.534034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.543903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.543981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.544003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.544023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.544033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.544055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.553965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.554039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.554061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.554072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.554081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.554104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.563914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.563992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.564018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.564030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.564043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.564065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.573904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.386 [2024-12-07 11:50:35.573971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.386 [2024-12-07 11:50:35.573991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.386 [2024-12-07 11:50:35.574002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.386 [2024-12-07 11:50:35.574015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.386 [2024-12-07 11:50:35.574039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.386 qpair failed and we were unable to recover it. 
00:38:36.386 [2024-12-07 11:50:35.584055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.584134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.584155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.584166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.584175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.584197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.593985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.594068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.594089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.594100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.594108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.594130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.604080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.604150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.604172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.604183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.604192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.604214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.614108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.614177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.614198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.614209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.614218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.614240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.624183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.624285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.624306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.624317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.624326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.624347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.634123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.634200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.634220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.634231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.634241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.634266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.644214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.644301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.644321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.644332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.644340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.644362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.387 [2024-12-07 11:50:35.654414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.387 [2024-12-07 11:50:35.654489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.387 [2024-12-07 11:50:35.654513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.387 [2024-12-07 11:50:35.654524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.387 [2024-12-07 11:50:35.654533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.387 [2024-12-07 11:50:35.654555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.387 qpair failed and we were unable to recover it. 
00:38:36.653 [2024-12-07 11:50:35.995203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.653 [2024-12-07 11:50:35.995289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.653 [2024-12-07 11:50:35.995310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.653 [2024-12-07 11:50:35.995321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.653 [2024-12-07 11:50:35.995330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.653 [2024-12-07 11:50:35.995354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.653 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.005153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.005233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.005257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.005269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.005279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.005301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.015229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.015303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.015324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.015335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.015344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.015366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.025264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.025376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.025397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.025408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.025417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.025438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.035309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.035391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.035411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.035423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.035432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.035453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.045309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.045387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.045408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.045419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.045428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.045449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.055348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.055417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.055438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.055449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.055458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.055480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.065433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.065507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.917 [2024-12-07 11:50:36.065528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.917 [2024-12-07 11:50:36.065539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.917 [2024-12-07 11:50:36.065548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.917 [2024-12-07 11:50:36.065570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.917 qpair failed and we were unable to recover it. 
00:38:36.917 [2024-12-07 11:50:36.075408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.917 [2024-12-07 11:50:36.075481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.075502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.075513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.075522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.075544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.085432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.085505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.085529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.085541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.085550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.085573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.095478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.095558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.095579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.095590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.095599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.095621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.105501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.105583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.105604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.105615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.105624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.105645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.115518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.115592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.115613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.115625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.115634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.115657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.125544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.125624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.125645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.125656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.125668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.125690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.135583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.135655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.135675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.135686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.135695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.135716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.145597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.145674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.145695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.145707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.145715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.145740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.155678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.155771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.155792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.155803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.155813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.155834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.165670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.165758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.165789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.918 [2024-12-07 11:50:36.165804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.918 [2024-12-07 11:50:36.165814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.918 [2024-12-07 11:50:36.165843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.918 qpair failed and we were unable to recover it. 
00:38:36.918 [2024-12-07 11:50:36.175610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.918 [2024-12-07 11:50:36.175686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.918 [2024-12-07 11:50:36.175718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.175732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.175742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.175770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.185691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.185772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.185804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.185819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.185829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.185857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.195718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.195800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.195824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.195836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.195845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.195876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.205787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.205898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.205920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.205932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.205941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.205963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.215815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.215902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.215928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.215940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.215949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.215971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.225810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.225884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.225905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.225916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.225925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.225947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.235860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.235940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.235961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.235972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.235981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.236002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.245900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.245973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.245994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.246005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.246018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.246040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.255910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.255986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.256008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.256025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.256037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.256060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:36.919 [2024-12-07 11:50:36.265892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.919 [2024-12-07 11:50:36.265993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.919 [2024-12-07 11:50:36.266019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.919 [2024-12-07 11:50:36.266031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.919 [2024-12-07 11:50:36.266039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:36.919 [2024-12-07 11:50:36.266061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.919 qpair failed and we were unable to recover it. 
00:38:37.183 [2024-12-07 11:50:36.275984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.183 [2024-12-07 11:50:36.276070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.183 [2024-12-07 11:50:36.276092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.183 [2024-12-07 11:50:36.276103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.183 [2024-12-07 11:50:36.276113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.183 [2024-12-07 11:50:36.276134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.183 qpair failed and we were unable to recover it. 
00:38:37.183 [2024-12-07 11:50:36.286009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.183 [2024-12-07 11:50:36.286088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.183 [2024-12-07 11:50:36.286109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.183 [2024-12-07 11:50:36.286120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.286130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.286152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.295990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.296067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.296088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.296099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.296108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.296131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.306034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.306111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.306132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.306143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.306152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.306173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.316086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.316166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.316187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.316199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.316207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.316233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.326043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.326112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.326133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.326144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.326154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.326176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.336101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.336172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.336193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.336204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.336213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.336235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.346061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.346135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.346156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.346167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.346176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.346198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.356160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.356227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.356248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.356259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.356268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.356290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.366187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.366262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.366283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.366295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.366304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.366325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.376214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.376289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.376310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.376321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.376330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.184 [2024-12-07 11:50:36.376352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.184 qpair failed and we were unable to recover it. 
00:38:37.184 [2024-12-07 11:50:36.386262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.184 [2024-12-07 11:50:36.386341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.184 [2024-12-07 11:50:36.386362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.184 [2024-12-07 11:50:36.386377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.184 [2024-12-07 11:50:36.386386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.386407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.396254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.396329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.396350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.396361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.396370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.396391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.406301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.406372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.406393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.406403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.406412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.406434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.416313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.416386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.416406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.416418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.416426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.416448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.426289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.426362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.426383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.426394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.426403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.426427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.436424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.436493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.436514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.436525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.436534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.436555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.446406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.446475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.446495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.446506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.446515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.446536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.456461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.456533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.456554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.456573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.456583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.456605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.466385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.466459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.466480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.466491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.466500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.466522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.476468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.476545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.185 [2024-12-07 11:50:36.476566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.185 [2024-12-07 11:50:36.476577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.185 [2024-12-07 11:50:36.476585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.185 [2024-12-07 11:50:36.476607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.185 qpair failed and we were unable to recover it. 
00:38:37.185 [2024-12-07 11:50:36.486536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.185 [2024-12-07 11:50:36.486603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.186 [2024-12-07 11:50:36.486624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.186 [2024-12-07 11:50:36.486635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.186 [2024-12-07 11:50:36.486644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.186 [2024-12-07 11:50:36.486669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.186 qpair failed and we were unable to recover it. 
00:38:37.186 [2024-12-07 11:50:36.496540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.186 [2024-12-07 11:50:36.496640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.186 [2024-12-07 11:50:36.496661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.186 [2024-12-07 11:50:36.496671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.186 [2024-12-07 11:50:36.496680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.186 [2024-12-07 11:50:36.496701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.186 qpair failed and we were unable to recover it. 
00:38:37.186 [2024-12-07 11:50:36.506608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.186 [2024-12-07 11:50:36.506680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.186 [2024-12-07 11:50:36.506700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.186 [2024-12-07 11:50:36.506711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.186 [2024-12-07 11:50:36.506720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.186 [2024-12-07 11:50:36.506741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.186 qpair failed and we were unable to recover it. 
00:38:37.186 [2024-12-07 11:50:36.516607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.186 [2024-12-07 11:50:36.516694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.186 [2024-12-07 11:50:36.516729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.186 [2024-12-07 11:50:36.516744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.186 [2024-12-07 11:50:36.516754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.186 [2024-12-07 11:50:36.516782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.186 qpair failed and we were unable to recover it. 
00:38:37.186 [2024-12-07 11:50:36.526662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.186 [2024-12-07 11:50:36.526755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.186 [2024-12-07 11:50:36.526787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.186 [2024-12-07 11:50:36.526801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.186 [2024-12-07 11:50:36.526811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.186 [2024-12-07 11:50:36.526840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.186 qpair failed and we were unable to recover it. 
00:38:37.448 [2024-12-07 11:50:36.536577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.448 [2024-12-07 11:50:36.536661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.448 [2024-12-07 11:50:36.536685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.448 [2024-12-07 11:50:36.536697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.448 [2024-12-07 11:50:36.536706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.448 [2024-12-07 11:50:36.536731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.448 qpair failed and we were unable to recover it. 
00:38:37.448 [2024-12-07 11:50:36.546693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.448 [2024-12-07 11:50:36.546766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.448 [2024-12-07 11:50:36.546788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.448 [2024-12-07 11:50:36.546799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.449 [2024-12-07 11:50:36.546808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.449 [2024-12-07 11:50:36.546831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.449 qpair failed and we were unable to recover it. 
00:38:37.449 [2024-12-07 11:50:36.556741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.449 [2024-12-07 11:50:36.556830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.449 [2024-12-07 11:50:36.556852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.449 [2024-12-07 11:50:36.556863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.449 [2024-12-07 11:50:36.556872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.449 [2024-12-07 11:50:36.556899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.449 qpair failed and we were unable to recover it. 
00:38:37.449 [2024-12-07 11:50:36.566703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.449 [2024-12-07 11:50:36.566816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.449 [2024-12-07 11:50:36.566836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.449 [2024-12-07 11:50:36.566848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.449 [2024-12-07 11:50:36.566856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.449 [2024-12-07 11:50:36.566878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.449 qpair failed and we were unable to recover it. 
00:38:37.449 [2024-12-07 11:50:36.576786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.449 [2024-12-07 11:50:36.576863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.449 [2024-12-07 11:50:36.576884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.449 [2024-12-07 11:50:36.576895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.449 [2024-12-07 11:50:36.576905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.449 [2024-12-07 11:50:36.576927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.449 qpair failed and we were unable to recover it. 
00:38:37.449 [2024-12-07 11:50:36.586787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.586861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.586882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.586893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.586902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.586923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.596855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.596933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.596954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.596966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.596975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.596997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.606877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.606948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.606968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.606979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.606988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.607014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.616811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.616880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.616901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.616912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.616921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.616945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.626839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.626911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.626933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.626944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.626953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.626975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.636983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.637062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.637083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.637094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.637103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.637125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.646974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.647072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.647096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.647108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.647116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.647138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.657005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.657080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.657101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.657112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.657121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.657146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.667053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.449 [2024-12-07 11:50:36.667158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.449 [2024-12-07 11:50:36.667179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.449 [2024-12-07 11:50:36.667190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.449 [2024-12-07 11:50:36.667199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.449 [2024-12-07 11:50:36.667220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.449 qpair failed and we were unable to recover it.
00:38:37.449 [2024-12-07 11:50:36.677059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.677143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.677163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.677174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.677183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.677204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.687082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.687156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.687178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.687189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.687201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.687224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.697126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.697200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.697221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.697232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.697241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.697262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.707051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.707129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.707150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.707161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.707170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.707192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.717180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.717249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.717270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.717281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.717290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.717312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.727208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.727291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.727311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.727322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.727331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.727353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.737191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.737312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.737334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.737345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.737353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.737375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.747258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.747346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.747367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.747378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.747387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.747409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.757318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.757392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.757413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.757424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.757433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.757454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.767317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.767393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.767414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.767425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.767434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.767457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.777335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.777404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.777430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.777441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.777450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.777472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.787371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.787444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.787464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.787475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.787484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.787506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.450 [2024-12-07 11:50:36.797396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.450 [2024-12-07 11:50:36.797481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.450 [2024-12-07 11:50:36.797502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.450 [2024-12-07 11:50:36.797513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.450 [2024-12-07 11:50:36.797522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.450 [2024-12-07 11:50:36.797544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.450 qpair failed and we were unable to recover it.
00:38:37.713 [2024-12-07 11:50:36.807429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.807514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.807534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.807546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.807555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.807577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.817459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.817534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.817555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.817570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.817579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.817600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.827438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.827509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.827530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.827541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.827550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.827575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.837461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.837535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.837555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.837567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.837575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.837596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.847536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.847616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.847636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.847647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.847656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.847677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.857570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.857656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.857677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.857688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.857697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.857719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.867512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.867588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.867611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.867626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.867635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.867660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.877602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.877674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.877695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.877707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.877716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.877738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.887619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.887690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.887711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.887723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.887732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.887754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.897651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.714 [2024-12-07 11:50:36.897722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.714 [2024-12-07 11:50:36.897744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.714 [2024-12-07 11:50:36.897755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.714 [2024-12-07 11:50:36.897764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.714 [2024-12-07 11:50:36.897785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.714 qpair failed and we were unable to recover it.
00:38:37.714 [2024-12-07 11:50:36.907725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.715 [2024-12-07 11:50:36.907806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.715 [2024-12-07 11:50:36.907827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.715 [2024-12-07 11:50:36.907839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.715 [2024-12-07 11:50:36.907848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.715 [2024-12-07 11:50:36.907869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.715 qpair failed and we were unable to recover it.
00:38:37.715 [2024-12-07 11:50:36.917670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.715 [2024-12-07 11:50:36.917765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.715 [2024-12-07 11:50:36.917787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.715 [2024-12-07 11:50:36.917798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.715 [2024-12-07 11:50:36.917807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.715 [2024-12-07 11:50:36.917831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.715 qpair failed and we were unable to recover it.
00:38:37.715 [2024-12-07 11:50:36.927698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:37.715 [2024-12-07 11:50:36.927772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:37.715 [2024-12-07 11:50:36.927793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:37.715 [2024-12-07 11:50:36.927804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:37.715 [2024-12-07 11:50:36.927813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:37.715 [2024-12-07 11:50:36.927835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:37.715 qpair failed and we were unable to recover it.
00:38:37.715 [2024-12-07 11:50:36.937774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.937845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.937866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.937877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.937886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.937907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.947799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.947896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.947917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.947931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.947940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.947962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.957742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.957818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.957840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.957851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.957860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.957881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.967853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.967935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.967956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.967967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.967982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.968004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.977871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.977953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.977974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.977985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.977994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.978021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.987944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.988036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.988057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.988068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.988077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.988103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:36.997953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:36.998075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:36.998096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.715 [2024-12-07 11:50:36.998107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.715 [2024-12-07 11:50:36.998116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.715 [2024-12-07 11:50:36.998141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.715 qpair failed and we were unable to recover it. 
00:38:37.715 [2024-12-07 11:50:37.007937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.715 [2024-12-07 11:50:37.008018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.715 [2024-12-07 11:50:37.008042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.008054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.008063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.008086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.716 [2024-12-07 11:50:37.017957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.716 [2024-12-07 11:50:37.018039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.716 [2024-12-07 11:50:37.018061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.018073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.018082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.018104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.716 [2024-12-07 11:50:37.027948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.716 [2024-12-07 11:50:37.028040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.716 [2024-12-07 11:50:37.028061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.028073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.028082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.028104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.716 [2024-12-07 11:50:37.038042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.716 [2024-12-07 11:50:37.038118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.716 [2024-12-07 11:50:37.038139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.038151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.038159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.038181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.716 [2024-12-07 11:50:37.048060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.716 [2024-12-07 11:50:37.048134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.716 [2024-12-07 11:50:37.048155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.048166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.048175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.048197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.716 [2024-12-07 11:50:37.058122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.716 [2024-12-07 11:50:37.058236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.716 [2024-12-07 11:50:37.058257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.716 [2024-12-07 11:50:37.058268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.716 [2024-12-07 11:50:37.058278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.716 [2024-12-07 11:50:37.058299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.716 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.068116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.068189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.068210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.068221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.068230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.068251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.078261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.078346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.078371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.078382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.078392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.078413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.088097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.088167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.088188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.088199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.088208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.088229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.098199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.098284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.098305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.098316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.098325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.098346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.108216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.108329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.108350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.108361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.108370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.108393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.118261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.118331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.118351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.985 [2024-12-07 11:50:37.118363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.985 [2024-12-07 11:50:37.118371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.985 [2024-12-07 11:50:37.118396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.985 qpair failed and we were unable to recover it. 
00:38:37.985 [2024-12-07 11:50:37.128312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.985 [2024-12-07 11:50:37.128382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.985 [2024-12-07 11:50:37.128403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.128414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.128423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.128444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.138100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.138167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.138188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.138199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.138208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.138230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.148329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.148400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.148420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.148431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.148440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.148462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.158288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.158358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.158379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.158390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.158398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.158420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.168195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.168261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.168281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.168292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.168301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.168326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.178249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.178317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.178338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.178349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.178358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.178379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.188441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.188534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.188555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.188566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.188575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.188596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.198464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.198535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.198555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.198566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.198575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.198597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.208435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.208501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.208525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.208536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.986 [2024-12-07 11:50:37.208544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.986 [2024-12-07 11:50:37.208567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.986 qpair failed and we were unable to recover it. 
00:38:37.986 [2024-12-07 11:50:37.218309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.986 [2024-12-07 11:50:37.218377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.986 [2024-12-07 11:50:37.218398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.986 [2024-12-07 11:50:37.218409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.218418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.218440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.228654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.228761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.228789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.228800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.228809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.228831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.238609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.238683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.238704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.238715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.238724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.238745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.248411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.248480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.248501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.248512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.248524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.248547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.258344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.258409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.258430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.258441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.258450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.258472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.268579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.268653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.268674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.268684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.268693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.268715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.278710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.278793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.278814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.278825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.278834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.278855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.288544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.288612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.288633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.288644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.288653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.288674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.298602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.298667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.298688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.298700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.298709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.298730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.987 qpair failed and we were unable to recover it. 
00:38:37.987 [2024-12-07 11:50:37.308810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.987 [2024-12-07 11:50:37.308884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.987 [2024-12-07 11:50:37.308905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.987 [2024-12-07 11:50:37.308916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.987 [2024-12-07 11:50:37.308925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.987 [2024-12-07 11:50:37.308946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.988 qpair failed and we were unable to recover it. 
00:38:37.988 [2024-12-07 11:50:37.318813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.988 [2024-12-07 11:50:37.318886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.988 [2024-12-07 11:50:37.318906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.988 [2024-12-07 11:50:37.318917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.988 [2024-12-07 11:50:37.318926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.988 [2024-12-07 11:50:37.318948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.988 qpair failed and we were unable to recover it. 
00:38:37.988 [2024-12-07 11:50:37.328538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.988 [2024-12-07 11:50:37.328602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.988 [2024-12-07 11:50:37.328623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.988 [2024-12-07 11:50:37.328634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.988 [2024-12-07 11:50:37.328643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:37.988 [2024-12-07 11:50:37.328664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.988 qpair failed and we were unable to recover it. 
00:38:38.324 [2024-12-07 11:50:37.338805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.324 [2024-12-07 11:50:37.338869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.324 [2024-12-07 11:50:37.338893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.324 [2024-12-07 11:50:37.338905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.324 [2024-12-07 11:50:37.338913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.324 [2024-12-07 11:50:37.338954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.324 qpair failed and we were unable to recover it. 
00:38:38.324 [2024-12-07 11:50:37.348888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.324 [2024-12-07 11:50:37.348963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.324 [2024-12-07 11:50:37.348983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.324 [2024-12-07 11:50:37.348994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.324 [2024-12-07 11:50:37.349003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.324 [2024-12-07 11:50:37.349031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.324 qpair failed and we were unable to recover it. 
00:38:38.324 [2024-12-07 11:50:37.358870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.324 [2024-12-07 11:50:37.358969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.324 [2024-12-07 11:50:37.358990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.324 [2024-12-07 11:50:37.359001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.324 [2024-12-07 11:50:37.359009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.324 [2024-12-07 11:50:37.359036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.324 qpair failed and we were unable to recover it. 
00:38:38.324 [2024-12-07 11:50:37.368753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.368818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.368838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.368849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.368858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.368879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.378792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.378859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.378880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.378894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.378902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.378923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.389034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.389163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.389184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.389196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.389204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.389226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.399019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.399090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.399111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.399122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.399131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.399153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.408796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.408865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.408886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.408897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.408906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.408927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.418908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.418993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.419018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.419029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.419039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.419061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.429106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.429181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.429201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.429212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.429221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.429243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.439046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.439116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.439136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.439147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.439156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.439178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.448965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.449038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.449059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.449070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.449079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.449101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.325 qpair failed and we were unable to recover it. 
00:38:38.325 [2024-12-07 11:50:37.458984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.325 [2024-12-07 11:50:37.459052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.325 [2024-12-07 11:50:37.459073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.325 [2024-12-07 11:50:37.459085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.325 [2024-12-07 11:50:37.459094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.325 [2024-12-07 11:50:37.459116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.469220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.469300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.469321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.469332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.469341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.469362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.479193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.479263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.479284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.479295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.479303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.479331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.489062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.489127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.489148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.489159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.489167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.489189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.499070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.499135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.499156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.499167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.499176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.499197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.509333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.509413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.509435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.509449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.509458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.509482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [2024-12-07 11:50:37.519360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.326 [2024-12-07 11:50:37.519432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.326 [2024-12-07 11:50:37.519453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.326 [2024-12-07 11:50:37.519464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.326 [2024-12-07 11:50:37.519472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.326 [2024-12-07 11:50:37.519494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.326 qpair failed and we were unable to recover it. 
00:38:38.326 [... the identical error sequence (ctrlr.c:764 Unknown controller ID 0x1; nvme_fabric.c:599 Connect command failed, rc -5; nvme_fabric.c:610 sct 1, sc 130; nvme_tcp.c:2348/2125 failed to connect tqpair=0x61500039ec00; nvme_qpair.c:812 CQ transport error -6 on qpair id 3) repeated 34 more times at ~10 ms intervals, from 2024-12-07 11:50:37.529 through 11:50:37.860; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:38:38.614 [2024-12-07 11:50:37.870364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.870438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.870458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.870470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.870479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.870501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.880367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.880438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.880460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.880471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.880480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.880502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.890210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.890274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.890295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.890306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.890315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.890336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.900252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.900322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.900346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.900358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.900367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.900389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.910359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.910434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.910455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.910466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.910475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.910496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.920493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.920567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.920588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.920599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.920608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.920629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.930306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.930373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.930394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.930405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.930414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.614 [2024-12-07 11:50:37.930435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.614 qpair failed and we were unable to recover it. 
00:38:38.614 [2024-12-07 11:50:37.940344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.614 [2024-12-07 11:50:37.940412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.614 [2024-12-07 11:50:37.940434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.614 [2024-12-07 11:50:37.940449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.614 [2024-12-07 11:50:37.940458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.615 [2024-12-07 11:50:37.940479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.615 qpair failed and we were unable to recover it. 
00:38:38.615 [2024-12-07 11:50:37.950488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.615 [2024-12-07 11:50:37.950559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.615 [2024-12-07 11:50:37.950580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.615 [2024-12-07 11:50:37.950591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.615 [2024-12-07 11:50:37.950600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.615 [2024-12-07 11:50:37.950623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.615 qpair failed and we were unable to recover it. 
00:38:38.615 [2024-12-07 11:50:37.960627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.615 [2024-12-07 11:50:37.960701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.615 [2024-12-07 11:50:37.960721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.615 [2024-12-07 11:50:37.960732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.615 [2024-12-07 11:50:37.960741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.615 [2024-12-07 11:50:37.960763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.615 qpair failed and we were unable to recover it. 
00:38:38.878 [2024-12-07 11:50:37.970425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.878 [2024-12-07 11:50:37.970514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.878 [2024-12-07 11:50:37.970534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.878 [2024-12-07 11:50:37.970546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.878 [2024-12-07 11:50:37.970555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.878 [2024-12-07 11:50:37.970577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.878 qpair failed and we were unable to recover it. 
00:38:38.878 [2024-12-07 11:50:37.980444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.878 [2024-12-07 11:50:37.980516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.878 [2024-12-07 11:50:37.980537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.878 [2024-12-07 11:50:37.980548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.878 [2024-12-07 11:50:37.980557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.878 [2024-12-07 11:50:37.980579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.878 qpair failed and we were unable to recover it. 
00:38:38.878 [2024-12-07 11:50:37.990749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.878 [2024-12-07 11:50:37.990830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.878 [2024-12-07 11:50:37.990851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.878 [2024-12-07 11:50:37.990862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.878 [2024-12-07 11:50:37.990871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.878 [2024-12-07 11:50:37.990893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.878 qpair failed and we were unable to recover it. 
00:38:38.878 [2024-12-07 11:50:38.000714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.878 [2024-12-07 11:50:38.000790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.878 [2024-12-07 11:50:38.000812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.878 [2024-12-07 11:50:38.000823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.000832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.000855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.010547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.010619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.010642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.010654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.010663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.010686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.020559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.020632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.020654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.020666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.020674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.020699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.030724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.030803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.030825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.030835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.030844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.030866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.040781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.040856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.040877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.040888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.040896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.040918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.050636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.050701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.050722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.050733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.050741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.050763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.060661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.060728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.060749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.060760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.060769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.060791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.070790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.070864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.070884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.070898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.070908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.070929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.080946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.081038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.081059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.081070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.081080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.081102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.090740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.090847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.090868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.090879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.090888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.090909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.879 qpair failed and we were unable to recover it. 
00:38:38.879 [2024-12-07 11:50:38.100786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.879 [2024-12-07 11:50:38.100855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.879 [2024-12-07 11:50:38.100877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.879 [2024-12-07 11:50:38.100891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.879 [2024-12-07 11:50:38.100900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.879 [2024-12-07 11:50:38.100922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.880 qpair failed and we were unable to recover it. 
00:38:38.880 [2024-12-07 11:50:38.110951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.880 [2024-12-07 11:50:38.111031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.880 [2024-12-07 11:50:38.111052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.880 [2024-12-07 11:50:38.111063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.880 [2024-12-07 11:50:38.111072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.880 [2024-12-07 11:50:38.111099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.880 qpair failed and we were unable to recover it. 
00:38:38.880 [2024-12-07 11:50:38.121057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.880 [2024-12-07 11:50:38.121133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.880 [2024-12-07 11:50:38.121155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.880 [2024-12-07 11:50:38.121166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.880 [2024-12-07 11:50:38.121174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.880 [2024-12-07 11:50:38.121196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.880 qpair failed and we were unable to recover it. 
00:38:38.880 [2024-12-07 11:50:38.130875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.880 [2024-12-07 11:50:38.130943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.880 [2024-12-07 11:50:38.130964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.880 [2024-12-07 11:50:38.130974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.880 [2024-12-07 11:50:38.130983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:38.880 [2024-12-07 11:50:38.131005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:38.880 qpair failed and we were unable to recover it. 
00:38:38.880 [2024-12-07 11:50:38.140881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.140946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.140966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.140977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.140986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.141007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.151111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.151201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.151221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.151232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.151241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.151263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.161068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.161142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.161163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.161174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.161183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.161205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.170986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.171059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.171080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.171091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.171100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.171121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.180991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.181061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.181082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.181093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.181102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.181124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.191262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.191347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.191368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.191379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.191388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.880 [2024-12-07 11:50:38.191413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.880 qpair failed and we were unable to recover it.
00:38:38.880 [2024-12-07 11:50:38.201215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.880 [2024-12-07 11:50:38.201284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.880 [2024-12-07 11:50:38.201309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.880 [2024-12-07 11:50:38.201320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.880 [2024-12-07 11:50:38.201329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.881 [2024-12-07 11:50:38.201353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.881 qpair failed and we were unable to recover it.
00:38:38.881 [2024-12-07 11:50:38.211015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.881 [2024-12-07 11:50:38.211099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.881 [2024-12-07 11:50:38.211120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.881 [2024-12-07 11:50:38.211131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.881 [2024-12-07 11:50:38.211139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.881 [2024-12-07 11:50:38.211161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.881 qpair failed and we were unable to recover it.
00:38:38.881 [2024-12-07 11:50:38.221116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.881 [2024-12-07 11:50:38.221181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.881 [2024-12-07 11:50:38.221202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.881 [2024-12-07 11:50:38.221213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.881 [2024-12-07 11:50:38.221221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:38.881 [2024-12-07 11:50:38.221243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:38.881 qpair failed and we were unable to recover it.
00:38:39.144 [2024-12-07 11:50:38.231247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.144 [2024-12-07 11:50:38.231343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.144 [2024-12-07 11:50:38.231363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.144 [2024-12-07 11:50:38.231374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.144 [2024-12-07 11:50:38.231383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.144 [2024-12-07 11:50:38.231405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.144 qpair failed and we were unable to recover it.
00:38:39.144 [2024-12-07 11:50:38.241364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.144 [2024-12-07 11:50:38.241448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.144 [2024-12-07 11:50:38.241469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.144 [2024-12-07 11:50:38.241480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.144 [2024-12-07 11:50:38.241493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.144 [2024-12-07 11:50:38.241515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.144 qpair failed and we were unable to recover it.
00:38:39.144 [2024-12-07 11:50:38.251234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.144 [2024-12-07 11:50:38.251300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.144 [2024-12-07 11:50:38.251321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.144 [2024-12-07 11:50:38.251333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.144 [2024-12-07 11:50:38.251347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.144 [2024-12-07 11:50:38.251369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.144 qpair failed and we were unable to recover it.
00:38:39.144 [2024-12-07 11:50:38.261199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.144 [2024-12-07 11:50:38.261295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.144 [2024-12-07 11:50:38.261316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.144 [2024-12-07 11:50:38.261327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.144 [2024-12-07 11:50:38.261336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.144 [2024-12-07 11:50:38.261357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.144 qpair failed and we were unable to recover it.
00:38:39.144 [2024-12-07 11:50:38.271443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.144 [2024-12-07 11:50:38.271527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.271548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.271559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.271568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.271590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.281459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.281534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.281554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.281565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.281574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.281595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.291293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.291358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.291379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.291390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.291399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.291421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.301347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.301414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.301435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.301446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.301455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.301476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.311491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.311574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.311594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.311605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.311614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.311636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.321599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.321668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.321689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.321700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.321710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.321731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.331415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.331484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.331509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.331521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.331530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.331552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.341399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.341508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.341529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.341540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.341549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.341570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.351670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.351760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.351783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.351798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.351807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.351829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.361635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.361734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.361755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.361766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.361775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.361800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.371576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.371655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.371676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.371687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.371700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.371722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.381541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.381607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.381627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.381639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.381647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.381669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.391766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.391848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.391869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.145 [2024-12-07 11:50:38.391880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.145 [2024-12-07 11:50:38.391889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.145 [2024-12-07 11:50:38.391910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.145 qpair failed and we were unable to recover it.
00:38:39.145 [2024-12-07 11:50:38.401794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.145 [2024-12-07 11:50:38.401867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.145 [2024-12-07 11:50:38.401888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.401899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.401907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.401929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.411643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.411711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.411731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.411742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.411751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.411772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.421647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.421711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.421731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.421742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.421751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.421773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.431830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.431912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.431933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.431944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.431953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.431975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.441925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.442026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.442047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.442058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.442067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.442089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.451648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.451745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.451766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.451778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.451787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.451808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.461665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.461728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.461752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.461763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.461772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.461794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.471922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.471996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.472021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.472032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.472040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.472063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.481992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.146 [2024-12-07 11:50:38.482073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.146 [2024-12-07 11:50:38.482094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.146 [2024-12-07 11:50:38.482105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.146 [2024-12-07 11:50:38.482114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.146 [2024-12-07 11:50:38.482135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.146 qpair failed and we were unable to recover it.
00:38:39.146 [2024-12-07 11:50:38.491873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.146 [2024-12-07 11:50:38.491978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.146 [2024-12-07 11:50:38.491999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.146 [2024-12-07 11:50:38.492018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.146 [2024-12-07 11:50:38.492027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.146 [2024-12-07 11:50:38.492049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.146 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.501908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.502026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.502048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.502072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.502081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.502103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.512095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.512167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.512193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.512204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.512213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.512235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.522069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.522160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.522180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.522191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.522200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.522222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.531975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.532050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.532071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.532082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.532091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.532116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.541998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.542074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.542095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.542105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.542114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.542140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.552326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.552398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.552419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.552430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.552438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.552460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.562396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.562489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.562510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.562521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.562530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.562550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.572023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.572092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.572112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.572123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.572132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.572154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.582049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.582145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.582166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.582177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.582186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.582207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.592347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.592426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.592446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.592458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.592467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.592488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.602351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.410 [2024-12-07 11:50:38.602451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.410 [2024-12-07 11:50:38.602471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.410 [2024-12-07 11:50:38.602483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.410 [2024-12-07 11:50:38.602492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.410 [2024-12-07 11:50:38.602513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.410 qpair failed and we were unable to recover it. 
00:38:39.410 [2024-12-07 11:50:38.612214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.612287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.612307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.612318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.612327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.612349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.622167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.622232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.622253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.622264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.622273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.622294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.632435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.632511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.632531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.632545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.632554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.632577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.642464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.642534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.642554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.642565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.642574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.642595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.652288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.652357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.652378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.652389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.652398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.652420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.662288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.662385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.662408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.662420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.662429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.662453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.672569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.672650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.672672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.672683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.672692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.672719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.682608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.682719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.682740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.682751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.682760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.682781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.692466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.692582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.692602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.692613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.692623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.692644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.702341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.702422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.702443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.702455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.702464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.702502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.712670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.712750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.712771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.712782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.712790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.712812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.722709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.722801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.722832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.722846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.722856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.722884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.732519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.411 [2024-12-07 11:50:38.732591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.411 [2024-12-07 11:50:38.732619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.411 [2024-12-07 11:50:38.732631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.411 [2024-12-07 11:50:38.732641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.411 [2024-12-07 11:50:38.732666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.411 qpair failed and we were unable to recover it. 
00:38:39.411 [2024-12-07 11:50:38.742574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.412 [2024-12-07 11:50:38.742638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.412 [2024-12-07 11:50:38.742659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.412 [2024-12-07 11:50:38.742671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.412 [2024-12-07 11:50:38.742680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.412 [2024-12-07 11:50:38.742702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.412 qpair failed and we were unable to recover it. 
00:38:39.412 [2024-12-07 11:50:38.752800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.412 [2024-12-07 11:50:38.752873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.412 [2024-12-07 11:50:38.752894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.412 [2024-12-07 11:50:38.752905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.412 [2024-12-07 11:50:38.752914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.412 [2024-12-07 11:50:38.752936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.412 qpair failed and we were unable to recover it. 
00:38:39.676 [2024-12-07 11:50:38.762927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.676 [2024-12-07 11:50:38.763044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.676 [2024-12-07 11:50:38.763069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.676 [2024-12-07 11:50:38.763081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.676 [2024-12-07 11:50:38.763090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.676 [2024-12-07 11:50:38.763118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.676 qpair failed and we were unable to recover it.
00:38:39.676 [2024-12-07 11:50:38.772633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.676 [2024-12-07 11:50:38.772698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.676 [2024-12-07 11:50:38.772718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.676 [2024-12-07 11:50:38.772729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.676 [2024-12-07 11:50:38.772738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.676 [2024-12-07 11:50:38.772760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.676 qpair failed and we were unable to recover it.
00:38:39.676 [2024-12-07 11:50:38.782624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.676 [2024-12-07 11:50:38.782691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.676 [2024-12-07 11:50:38.782712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.676 [2024-12-07 11:50:38.782723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.676 [2024-12-07 11:50:38.782733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.676 [2024-12-07 11:50:38.782754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.676 qpair failed and we were unable to recover it.
00:38:39.676 [2024-12-07 11:50:38.792877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.676 [2024-12-07 11:50:38.792955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.792975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.792986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.792996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.793022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.802936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.803002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.803028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.803040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.803052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.803074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.812765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.812830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.812851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.812862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.812871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.812892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.822728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.822794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.822815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.822826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.822835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.822857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.833023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.833098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.833118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.833130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.833139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.833161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.843040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.843112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.843132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.843143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.843152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.843174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.852791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.852858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.852879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.852890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.852899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.852921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.862816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.862889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.862910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.862921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.862930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.862953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.873072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.873145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.873166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.873177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.873186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.873212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.883123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.883227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.883248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.883259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.883268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.883290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.892999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.893073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.893097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.893108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.893117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.893139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.902918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.902980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.903000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.903016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.903026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.903047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.913332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.913402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.677 [2024-12-07 11:50:38.913423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.677 [2024-12-07 11:50:38.913434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.677 [2024-12-07 11:50:38.913443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.677 [2024-12-07 11:50:38.913465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.677 qpair failed and we were unable to recover it.
00:38:39.677 [2024-12-07 11:50:38.923277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.677 [2024-12-07 11:50:38.923351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.923371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.923382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.923391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.923413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.933147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.933209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.933229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.933240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.933252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.933273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.943069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.943134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.943154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.943165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.943174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.943195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.953346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.953421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.953442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.953454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.953463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.953484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.963371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.963452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.963473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.963485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.963494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.963515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.973217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.973288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.973308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.973319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.973328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.973349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.983251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.983322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.983343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.983354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.983362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.983383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:38.993352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:38.993431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:38.993452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:38.993463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:38.993471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:38.993493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:39.003503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:39.003605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:39.003629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:39.003642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:39.003651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:39.003675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:39.013302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:39.013399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:39.013420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:39.013432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:39.013441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:39.013464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.678 [2024-12-07 11:50:39.023346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.678 [2024-12-07 11:50:39.023414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.678 [2024-12-07 11:50:39.023439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.678 [2024-12-07 11:50:39.023470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.678 [2024-12-07 11:50:39.023479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.678 [2024-12-07 11:50:39.023502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.678 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.033541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.033616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.033637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.033649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.033657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.033679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.043647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.043747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.043768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.043779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.043788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.043813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.053338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.053407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.053429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.053440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.053449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.053470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.063457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.063525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.063546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.063561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.063570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.063594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.073600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.073674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.073695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.073706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.073715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.073737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.083686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.083782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.083802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.083813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.083822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.083844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.093527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.093594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.093615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.093626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.093634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.093655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.942 [2024-12-07 11:50:39.103462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:39.942 [2024-12-07 11:50:39.103576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:39.942 [2024-12-07 11:50:39.103597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:39.942 [2024-12-07 11:50:39.103608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:39.942 [2024-12-07 11:50:39.103617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:39.942 [2024-12-07 11:50:39.103642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:39.942 qpair failed and we were unable to recover it.
00:38:39.943 [2024-12-07 11:50:39.113763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.113837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.113858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.113870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.113879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.113900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.123775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.123852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.123873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.123884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.123893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.123914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.133638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.133707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.133728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.133739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.133747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.133769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.143634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.143709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.143731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.143746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.143755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.143777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.153885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.153958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.153979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.153990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.153999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.154026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.163933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.164018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.164039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.164050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.164059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.164081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.173722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.173785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.173806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.173817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.173825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.173847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.183747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.183817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.183839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.183850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.183858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.183880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.193973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.194052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.194073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.194087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.194096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.194117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.204040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.204112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.204133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.204144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.204153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.204176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.213852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.213926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.213947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.213957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.213966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.213991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.223804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.223874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.223898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.223913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.223922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.223945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.234082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.943 [2024-12-07 11:50:39.234158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.943 [2024-12-07 11:50:39.234180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.943 [2024-12-07 11:50:39.234191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.943 [2024-12-07 11:50:39.234200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.943 [2024-12-07 11:50:39.234229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.943 qpair failed and we were unable to recover it. 
00:38:39.943 [2024-12-07 11:50:39.244104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.944 [2024-12-07 11:50:39.244176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.944 [2024-12-07 11:50:39.244196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.944 [2024-12-07 11:50:39.244207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.944 [2024-12-07 11:50:39.244216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.944 [2024-12-07 11:50:39.244238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.944 qpair failed and we were unable to recover it. 
00:38:39.944 [2024-12-07 11:50:39.253918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.944 [2024-12-07 11:50:39.253986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.944 [2024-12-07 11:50:39.254007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.944 [2024-12-07 11:50:39.254022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.944 [2024-12-07 11:50:39.254031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.944 [2024-12-07 11:50:39.254053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.944 qpair failed and we were unable to recover it. 
00:38:39.944 [2024-12-07 11:50:39.263965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.944 [2024-12-07 11:50:39.264037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.944 [2024-12-07 11:50:39.264058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.944 [2024-12-07 11:50:39.264069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.944 [2024-12-07 11:50:39.264078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.944 [2024-12-07 11:50:39.264100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.944 qpair failed and we were unable to recover it. 
00:38:39.944 [2024-12-07 11:50:39.274115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.944 [2024-12-07 11:50:39.274191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.944 [2024-12-07 11:50:39.274212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.944 [2024-12-07 11:50:39.274223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.944 [2024-12-07 11:50:39.274232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.944 [2024-12-07 11:50:39.274253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.944 qpair failed and we were unable to recover it. 
00:38:39.944 [2024-12-07 11:50:39.284223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.944 [2024-12-07 11:50:39.284297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.944 [2024-12-07 11:50:39.284318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.944 [2024-12-07 11:50:39.284329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.944 [2024-12-07 11:50:39.284338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:39.944 [2024-12-07 11:50:39.284359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:39.944 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.293970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.294051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.294072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.294083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.294092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.294113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.304083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.304150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.304171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.304183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.304191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.304213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.314320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.314439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.314460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.314471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.314479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.314501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.324406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.324508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.324533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.324544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.324553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.324575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.334097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.334160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.334181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.334192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.334201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.334222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.344145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.344224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.207 [2024-12-07 11:50:39.344245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.207 [2024-12-07 11:50:39.344255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.207 [2024-12-07 11:50:39.344264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.207 [2024-12-07 11:50:39.344286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.207 qpair failed and we were unable to recover it. 
00:38:40.207 [2024-12-07 11:50:39.354397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.207 [2024-12-07 11:50:39.354469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.208 [2024-12-07 11:50:39.354489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.208 [2024-12-07 11:50:39.354500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.208 [2024-12-07 11:50:39.354509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.208 [2024-12-07 11:50:39.354530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.208 qpair failed and we were unable to recover it. 
00:38:40.208 [2024-12-07 11:50:39.364373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.208 [2024-12-07 11:50:39.364442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.208 [2024-12-07 11:50:39.364463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.208 [2024-12-07 11:50:39.364474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.208 [2024-12-07 11:50:39.364486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.208 [2024-12-07 11:50:39.364509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.208 qpair failed and we were unable to recover it. 
00:38:40.208 [2024-12-07 11:50:39.374309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.208 [2024-12-07 11:50:39.374373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.208 [2024-12-07 11:50:39.374394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.208 [2024-12-07 11:50:39.374405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.208 [2024-12-07 11:50:39.374414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.208 [2024-12-07 11:50:39.374435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.208 qpair failed and we were unable to recover it. 
00:38:40.208 [2024-12-07 11:50:39.384297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.384363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.384384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.384395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.384404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.384428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.394506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.394576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.394597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.394608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.394617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.394639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.404553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.404625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.404646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.404657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.404666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.404688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.414389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.414457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.414478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.414490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.414498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.414520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.424422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.424489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.424510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.424522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.424531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.424552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.434646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.434739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.434760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.434771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.434780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.434801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.444645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.444713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.444734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.444745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.444753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.444775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.454506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.454584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.454608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.454620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.454628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.454650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.464537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.464606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.464627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.464638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.464647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.464669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.474753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.474831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.208 [2024-12-07 11:50:39.474851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.208 [2024-12-07 11:50:39.474862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.208 [2024-12-07 11:50:39.474871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.208 [2024-12-07 11:50:39.474893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.208 qpair failed and we were unable to recover it.
00:38:40.208 [2024-12-07 11:50:39.484778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.208 [2024-12-07 11:50:39.484860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.484881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.484893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.484901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.484924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.494674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.494746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.494767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.494778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.494790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.494812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.504657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.504726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.504747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.504759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.504767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.504789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.514870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.514953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.514973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.514984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.514993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.515021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.524904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.524972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.524993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.525005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.525019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.525041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.534658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.534726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.534747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.534758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.534773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.534795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.544675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.544739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.544760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.544771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.544780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.544802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.209 [2024-12-07 11:50:39.554995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.209 [2024-12-07 11:50:39.555074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.209 [2024-12-07 11:50:39.555095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.209 [2024-12-07 11:50:39.555106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.209 [2024-12-07 11:50:39.555116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.209 [2024-12-07 11:50:39.555143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.209 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.565016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.565092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.565114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.565125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.565134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.565156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.574823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.574907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.574930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.574942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.574951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.574973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.584909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.585013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.585038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.585050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.585059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.585081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.595117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.595191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.595212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.595223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.595232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.595253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.605118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.605193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.605214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.605225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.605234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.605256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.614976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.615086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.615107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.615118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.615127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.615148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.625172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.625247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.625268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.625282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.625291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.625313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.635234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.473 [2024-12-07 11:50:39.635308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.473 [2024-12-07 11:50:39.635329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.473 [2024-12-07 11:50:39.635340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.473 [2024-12-07 11:50:39.635349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.473 [2024-12-07 11:50:39.635371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.473 qpair failed and we were unable to recover it.
00:38:40.473 [2024-12-07 11:50:39.645274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.645357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.645378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.645389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.645398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.645420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.655081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.655189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.655210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.655222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.655230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.655252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.665106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.665195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.665216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.665227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.665236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.665261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.675384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.675492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.675515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.675528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.675537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.675560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.685284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.685354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.685374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.685385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.685394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.685416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.695137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.695232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.695253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.695264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.695273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.695295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.705254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.705321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.705343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.705354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.705363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.705385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.715404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.715493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.715514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.715525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.715534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.715556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.725473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.474 [2024-12-07 11:50:39.725543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.474 [2024-12-07 11:50:39.725563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.474 [2024-12-07 11:50:39.725574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.474 [2024-12-07 11:50:39.725584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00
00:38:40.474 [2024-12-07 11:50:39.725609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.474 qpair failed and we were unable to recover it.
00:38:40.474 [2024-12-07 11:50:39.735288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.474 [2024-12-07 11:50:39.735355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.474 [2024-12-07 11:50:39.735376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.474 [2024-12-07 11:50:39.735387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.474 [2024-12-07 11:50:39.735396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.474 [2024-12-07 11:50:39.735417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.745342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.745405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.745426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.745437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.745446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.745468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.755550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.755622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.755643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.755657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.755667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.755688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.765573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.765643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.765664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.765675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.765684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.765705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.775392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.775461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.775482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.775494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.775502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.775524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.785419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.785513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.785534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.785545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.785554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.785575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.795680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.795751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.795780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.795791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.795800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.795825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.805491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.805554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.805575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.805586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.805595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.805616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.475 [2024-12-07 11:50:39.815575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.475 [2024-12-07 11:50:39.815642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.475 [2024-12-07 11:50:39.815663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.475 [2024-12-07 11:50:39.815674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.475 [2024-12-07 11:50:39.815683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.475 [2024-12-07 11:50:39.815704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.475 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.825563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.825650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.825671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.825682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.825691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.825712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.835835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.835940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.835960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.835971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.835980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.836002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.845606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.845681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.845702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.845713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.845721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.845744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.855650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.855718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.855739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.855750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.855759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.855781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.865648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.865712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.865732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.865743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.865752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.865773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.875812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.875890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.875911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.875922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.875931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.875952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.885724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.885793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.885816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.885827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.885836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.885857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.895681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.895749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.895769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.895780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.895789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.895813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.905778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.905845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.905866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.905877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.905886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.905908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.915940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.916017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.916039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.916050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.916059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.916081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.925840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.925909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.925930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.925940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.740 [2024-12-07 11:50:39.925953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.740 [2024-12-07 11:50:39.925974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.740 qpair failed and we were unable to recover it. 
00:38:40.740 [2024-12-07 11:50:39.935858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.740 [2024-12-07 11:50:39.935927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.740 [2024-12-07 11:50:39.935947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.740 [2024-12-07 11:50:39.935958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.935967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.935989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.945800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.945877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.945897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.945909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.945918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.945939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.956100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.956182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.956203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.956215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.956223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.956245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.965930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.966000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.966025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.966036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.966045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.966067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.975948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.976009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.976034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.976045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.976054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.976076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.986030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.986102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.986126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.986138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.986147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.986170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:39.996174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:39.996248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:39.996269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:39.996280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:39.996290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:39.996311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:40.006058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:40.006132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:40.006155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:40.006166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:40.006175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:40.006198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 [2024-12-07 11:50:40.016123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.741 [2024-12-07 11:50:40.016196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.741 [2024-12-07 11:50:40.016221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.741 [2024-12-07 11:50:40.016234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.741 [2024-12-07 11:50:40.016243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039ec00 00:38:40.741 [2024-12-07 11:50:40.016268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:40.741 qpair failed and we were unable to recover it. 
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 Write completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.741 [2024-12-07 11:50:40.017780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:38:40.741 [2024-12-07 11:50:40.017856] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:38:40.741 A controller has encountered a failure and is being reset.
00:38:40.741 [2024-12-07 11:50:40.017909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor
00:38:40.741 Controller properly reset.
00:38:40.741 Read completed with error (sct=0, sc=8)
00:38:40.741 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Write completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 Read completed with error (sct=0, sc=8)
00:38:40.742 starting I/O failed
00:38:40.742 [2024-12-07 11:50:40.071455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:41.004 Initializing NVMe Controllers
00:38:41.004 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:41.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:41.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:38:41.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:38:41.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:38:41.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:38:41.004 Initialization complete. Launching workers.
00:38:41.004 Starting thread on core 1
00:38:41.004 Starting thread on core 2
00:38:41.004 Starting thread on core 3
00:38:41.004 Starting thread on core 0
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:38:41.004
00:38:41.004 real 0m11.687s
00:38:41.004 user 0m21.282s
00:38:41.004 sys 0m3.625s
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:41.004 ************************************
00:38:41.004 END TEST nvmf_target_disconnect_tc2
00:38:41.004 ************************************
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:41.004 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:41.004 rmmod nvme_tcp
00:38:41.004 rmmod nvme_fabrics
00:38:41.005 rmmod nvme_keyring
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2790713 ']'
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2790713
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2790713 ']'
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2790713
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790713
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790713'
00:38:41.005 killing process with pid 2790713
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2790713
00:38:41.005 11:50:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2790713
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:41.945 11:50:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:44.492
00:38:44.492 real 0m22.564s
00:38:44.492 user 0m51.586s
00:38:44.492 sys 0m9.840s
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:44.492 ************************************
00:38:44.492 END TEST nvmf_target_disconnect
00:38:44.492 ************************************
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:38:44.492
00:38:44.492 real 8m20.726s
00:38:44.492 user 18m38.853s
00:38:44.492 sys 2m27.253s
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:44.492 11:50:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:44.492 ************************************
00:38:44.492 END TEST nvmf_host
00:38:44.492 ************************************
00:38:44.492 11:50:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:38:44.492 11:50:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:38:44.492 11:50:43 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:44.492 11:50:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:44.492 11:50:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:44.492 11:50:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:38:44.492 ************************************
00:38:44.492 START TEST nvmf_target_core_interrupt_mode
00:38:44.492 ************************************
00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:38:44.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:44.492 11:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:44.492 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.493 --rc 
genhtml_branch_coverage=1 00:38:44.493 --rc genhtml_function_coverage=1 00:38:44.493 --rc genhtml_legend=1 00:38:44.493 --rc geninfo_all_blocks=1 00:38:44.493 --rc geninfo_unexecuted_blocks=1 00:38:44.493 00:38:44.493 ' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.493 --rc genhtml_branch_coverage=1 00:38:44.493 --rc genhtml_function_coverage=1 00:38:44.493 --rc genhtml_legend=1 00:38:44.493 --rc geninfo_all_blocks=1 00:38:44.493 --rc geninfo_unexecuted_blocks=1 00:38:44.493 00:38:44.493 ' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.493 --rc genhtml_branch_coverage=1 00:38:44.493 --rc genhtml_function_coverage=1 00:38:44.493 --rc genhtml_legend=1 00:38:44.493 --rc geninfo_all_blocks=1 00:38:44.493 --rc geninfo_unexecuted_blocks=1 00:38:44.493 00:38:44.493 ' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.493 --rc genhtml_branch_coverage=1 00:38:44.493 --rc genhtml_function_coverage=1 00:38:44.493 --rc genhtml_legend=1 00:38:44.493 --rc geninfo_all_blocks=1 00:38:44.493 --rc geninfo_unexecuted_blocks=1 00:38:44.493 00:38:44.493 ' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:44.493 
11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.493 11:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:44.493 
11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:44.493 ************************************ 00:38:44.493 START TEST nvmf_abort 00:38:44.493 ************************************ 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:44.493 * Looking for test storage... 
00:38:44.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:44.493 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:44.494 11:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:44.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.494 --rc genhtml_branch_coverage=1 00:38:44.494 --rc genhtml_function_coverage=1 00:38:44.494 --rc genhtml_legend=1 00:38:44.494 --rc geninfo_all_blocks=1 00:38:44.494 --rc geninfo_unexecuted_blocks=1 00:38:44.494 00:38:44.494 ' 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:44.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.494 --rc genhtml_branch_coverage=1 00:38:44.494 --rc genhtml_function_coverage=1 00:38:44.494 --rc genhtml_legend=1 00:38:44.494 --rc geninfo_all_blocks=1 00:38:44.494 --rc geninfo_unexecuted_blocks=1 00:38:44.494 00:38:44.494 ' 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:44.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.494 --rc genhtml_branch_coverage=1 00:38:44.494 --rc genhtml_function_coverage=1 00:38:44.494 --rc genhtml_legend=1 00:38:44.494 --rc geninfo_all_blocks=1 00:38:44.494 --rc geninfo_unexecuted_blocks=1 00:38:44.494 00:38:44.494 ' 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:44.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.494 --rc genhtml_branch_coverage=1 00:38:44.494 --rc genhtml_function_coverage=1 00:38:44.494 --rc genhtml_legend=1 00:38:44.494 --rc geninfo_all_blocks=1 00:38:44.494 --rc geninfo_unexecuted_blocks=1 00:38:44.494 00:38:44.494 ' 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:44.494 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:44.756 11:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:44.756 11:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:44.756 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:44.757 11:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.900 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:52.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:52.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.900 
11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:52.900 Found net devices under 0000:31:00.0: cvl_0_0 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:52.900 Found net devices under 0000:31:00.1: cvl_0_1 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.900 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.900 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.900 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.900 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:52.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:38:52.901 00:38:52.901 --- 10.0.0.2 ping statistics --- 00:38:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.901 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:52.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:38:52.901 00:38:52.901 --- 10.0.0.1 ping statistics --- 00:38:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.901 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2796465 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2796465 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2796465 ']' 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.901 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 [2024-12-07 11:50:51.319356] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:52.901 [2024-12-07 11:50:51.322041] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:38:52.901 [2024-12-07 11:50:51.322148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.901 [2024-12-07 11:50:51.476054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:52.901 [2024-12-07 11:50:51.599323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.901 [2024-12-07 11:50:51.599389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:52.901 [2024-12-07 11:50:51.599404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.901 [2024-12-07 11:50:51.599416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.901 [2024-12-07 11:50:51.599428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.901 [2024-12-07 11:50:51.602275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.901 [2024-12-07 11:50:51.602536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.901 [2024-12-07 11:50:51.602562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.901 [2024-12-07 11:50:51.882045] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:52.901 [2024-12-07 11:50:51.883324] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:52.901 [2024-12-07 11:50:51.883353] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:38:52.901 [2024-12-07 11:50:51.883686] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 [2024-12-07 11:50:52.119928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:38:52.901 Malloc0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 Delay0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.901 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:53.162 [2024-12-07 11:50:52.263857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.162 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:53.162 [2024-12-07 11:50:52.426108] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:55.708 Initializing NVMe Controllers 00:38:55.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:55.708 controller IO queue size 128 less than required 00:38:55.708 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:55.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:55.708 Initialization complete. Launching workers. 
00:38:55.708 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27407 00:38:55.708 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27464, failed to submit 66 00:38:55.708 success 27407, unsuccessful 57, failed 0 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:55.708 rmmod nvme_tcp 00:38:55.708 rmmod nvme_fabrics 00:38:55.708 rmmod nvme_keyring 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:55.708 11:50:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2796465 ']' 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2796465 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2796465 ']' 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2796465 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796465 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796465' 00:38:55.708 killing process with pid 2796465 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2796465 00:38:55.708 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2796465 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.277 11:50:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.277 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.821 00:38:58.821 real 0m13.943s 00:38:58.821 user 0m12.154s 00:38:58.821 sys 0m6.900s 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.821 ************************************ 00:38:58.821 END TEST nvmf_abort 00:38:58.821 ************************************ 00:38:58.821 11:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.821 ************************************ 00:38:58.821 START TEST nvmf_ns_hotplug_stress 00:38:58.821 ************************************ 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:58.821 * Looking for test storage... 
00:38:58.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.821 11:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.821 11:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.821 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:58.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.821 --rc genhtml_branch_coverage=1 00:38:58.821 --rc genhtml_function_coverage=1 00:38:58.821 --rc genhtml_legend=1 00:38:58.821 --rc geninfo_all_blocks=1 00:38:58.821 --rc geninfo_unexecuted_blocks=1 00:38:58.821 00:38:58.821 ' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.822 --rc genhtml_branch_coverage=1 00:38:58.822 --rc genhtml_function_coverage=1 00:38:58.822 --rc genhtml_legend=1 00:38:58.822 --rc geninfo_all_blocks=1 00:38:58.822 --rc geninfo_unexecuted_blocks=1 00:38:58.822 00:38:58.822 ' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.822 --rc genhtml_branch_coverage=1 00:38:58.822 --rc genhtml_function_coverage=1 00:38:58.822 --rc genhtml_legend=1 00:38:58.822 --rc geninfo_all_blocks=1 00:38:58.822 --rc geninfo_unexecuted_blocks=1 00:38:58.822 00:38:58.822 ' 00:38:58.822 11:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:58.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.822 --rc genhtml_branch_coverage=1 00:38:58.822 --rc genhtml_function_coverage=1 00:38:58.822 --rc genhtml_legend=1 00:38:58.822 --rc geninfo_all_blocks=1 00:38:58.822 --rc geninfo_unexecuted_blocks=1 00:38:58.822 00:38:58.822 ' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.822 11:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.822 
11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:58.822 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.823 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.968 
11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.968 11:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:06.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.968 11:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:06.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.968 
11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:06.968 Found net devices under 0000:31:00.0: cvl_0_0 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:06.968 Found net devices under 0000:31:00.1: cvl_0_1 00:39:06.968 
11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:39:06.968 00:39:06.968 --- 10.0.0.2 ping statistics --- 00:39:06.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.968 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:06.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:39:06.968 00:39:06.968 --- 10.0.0.1 ping statistics --- 00:39:06.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.968 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:06.968 11:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:06.968 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2801430 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2801430 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2801430 ']' 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.969 11:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:06.969 [2024-12-07 11:51:05.543115] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:06.969 [2024-12-07 11:51:05.545772] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:39:06.969 [2024-12-07 11:51:05.545873] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:06.969 [2024-12-07 11:51:05.712960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:06.969 [2024-12-07 11:51:05.842449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:06.969 [2024-12-07 11:51:05.842514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:06.969 [2024-12-07 11:51:05.842536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:06.969 [2024-12-07 11:51:05.842547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:06.969 [2024-12-07 11:51:05.842559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
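The `waitforlisten` step recorded above polls until the just-launched `nvmf_tgt` process is listening on its UNIX RPC socket (`/var/tmp/spdk.sock`) before any RPC calls are issued. A minimal sketch of that polling pattern, with a stand-in path and a simulated target (the real helper lives in SPDK's `autotest_common.sh`; nothing here is the actual script):

```shell
# Stand-in socket path; the real target creates /var/tmp/spdk.sock.
sock=/tmp/demo.sock
rm -f "$sock"

# Simulate the target coming up after a short delay (a plain file here,
# where the real nvmf_tgt binds a UNIX domain socket).
( sleep 0.2; : > "$sock" ) &

# Poll with a bounded number of retries, as waitforlisten does.
for i in $(seq 1 50); do
  [ -e "$sock" ] && break
  sleep 0.1
done
wait
[ -e "$sock" ] && echo "socket ready"
```

The bounded retry count matters: if the target crashes during startup, the loop gives up instead of hanging the whole autotest run.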
00:39:06.969 [2024-12-07 11:51:05.845279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:06.969 [2024-12-07 11:51:05.845542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.969 [2024-12-07 11:51:05.845569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:06.969 [2024-12-07 11:51:06.126232] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:06.969 [2024-12-07 11:51:06.127558] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:06.969 [2024-12-07 11:51:06.127734] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:06.969 [2024-12-07 11:51:06.128065] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:06.969 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.969 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:39:06.969 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:06.969 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:06.969 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:07.228 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:07.228 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:39:07.228 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:07.228 [2024-12-07 11:51:06.494904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:07.228 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:07.488 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:07.748 [2024-12-07 11:51:06.839568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:07.748 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:07.748 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:08.019 Malloc0 00:39:08.019 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:08.019 Delay0 00:39:08.280 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.280 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:08.540 NULL1 00:39:08.540 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:08.800 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2802075 00:39:08.800 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:08.800 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:08.800 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.800 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.059 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:09.059 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:09.319 true 00:39:09.319 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:09.319 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.319 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.578 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:09.578 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:09.839 true 00:39:09.839 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:09.839 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.099 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.099 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:10.099 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:10.359 true 00:39:10.359 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:10.359 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.620 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.880 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:10.880 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:10.880 true 00:39:10.880 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:10.880 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.139 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.400 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:11.400 11:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:11.400 true 00:39:11.400 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:11.400 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.661 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.922 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:11.923 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:12.185 true 00:39:12.185 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:12.185 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.185 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.445 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:39:12.446 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:12.706 true 00:39:12.706 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:12.706 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.706 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.967 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:12.967 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:13.228 true 00:39:13.228 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:13.228 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.489 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.490 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:39:13.490 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:13.752 true 00:39:13.752 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:13.752 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.013 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.013 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:14.013 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:14.274 true 00:39:14.274 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:14.274 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.535 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.797 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:14.797 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:14.797 true 00:39:14.797 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:14.797 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.058 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.319 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:15.319 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:15.319 true 00:39:15.319 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:15.319 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.581 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.843 11:51:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:15.843 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:15.843 true 00:39:15.843 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:15.843 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.104 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.365 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:16.365 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:16.625 true 00:39:16.625 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:16.626 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.626 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:39:16.887 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:16.887 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:17.149 true 00:39:17.149 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:17.149 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.410 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.410 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:17.410 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:17.671 true 00:39:17.671 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:17.671 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.931 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.931 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:17.931 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:18.191 true 00:39:18.191 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:18.191 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.452 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.452 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:18.452 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:18.713 true 00:39:18.713 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:18.713 11:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.974 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.234 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:19.234 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:19.234 true 00:39:19.234 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:19.234 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.495 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.755 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:19.755 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:19.755 true 00:39:19.755 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:19.755 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.016 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.277 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:20.277 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:20.539 true 00:39:20.539 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:20.539 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.539 11:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.800 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:20.800 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:21.062 true 00:39:21.062 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:21.062 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.062 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.323 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:21.323 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:21.585 true 00:39:21.585 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:21.585 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.846 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.846 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:21.846 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:22.109 true 00:39:22.109 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:22.109 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.371 11:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.371 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:22.371 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:22.633 true 00:39:22.633 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:22.633 11:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.893 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.893 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:22.893 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:23.153 true 00:39:23.153 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:23.153 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:39:23.413 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.674 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:23.674 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:23.674 true 00:39:23.674 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:23.674 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.934 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.196 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:24.196 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:24.196 true 00:39:24.196 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:24.196 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:39:24.457 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.735 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:24.735 11:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:24.735 true 00:39:24.996 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:24.996 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.996 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.256 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:25.256 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:25.517 true 00:39:25.517 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:25.517 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.517 11:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.777 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:25.777 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:26.037 true 00:39:26.037 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:26.037 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.037 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.298 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:26.298 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:26.559 true 00:39:26.559 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:26.559 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.820 11:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.820 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:26.820 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:27.080 true 00:39:27.080 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:27.080 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.342 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.603 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:27.603 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:27.603 true 00:39:27.603 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:27.603 11:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.865 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.128 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:28.128 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:28.128 true 00:39:28.128 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:28.128 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.390 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.651 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:28.651 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:28.651 true 00:39:28.651 11:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:28.651 11:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.912 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.174 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:29.174 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:29.174 true 00:39:29.435 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:29.435 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.435 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.696 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:29.696 11:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:29.957 true 00:39:29.957 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 
00:39:29.957 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.957 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.219 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:30.219 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:30.480 true 00:39:30.480 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:30.480 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.480 11:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.740 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:30.740 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:31.001 true 00:39:31.001 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2802075 00:39:31.001 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.264 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.264 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:31.264 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:31.526 true 00:39:31.526 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:31.526 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.787 11:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.787 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:31.787 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:32.049 true 00:39:32.049 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:32.049 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.310 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.571 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:32.571 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:32.571 true 00:39:32.571 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:32.571 11:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.914 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.198 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:33.198 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:33.198 true 00:39:33.198 11:51:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:33.198 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.511 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.512 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:33.512 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:33.775 true 00:39:33.775 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:33.775 11:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.036 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.036 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:34.036 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:34.297 true 
00:39:34.297 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:34.297 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.558 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.558 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:34.558 11:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:34.819 true 00:39:34.819 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:34.819 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.080 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:35.342 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:39:35.342 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:39:35.342 true 00:39:35.342 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:35.342 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.604 11:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:35.867 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:39:35.867 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:39:35.867 true 00:39:35.867 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:35.867 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.129 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:36.389 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:39:36.389 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:39:36.389 true 00:39:36.649 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:36.649 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.649 11:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:36.909 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:39:36.909 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:39:37.168 true 00:39:37.169 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:37.169 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.169 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:37.428 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:39:37.428 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:39:37.687 true 00:39:37.687 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:37.687 11:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.947 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:37.947 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:39:37.947 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:39:38.205 true 00:39:38.205 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075 00:39:38.205 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.465 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:38.465 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:39:38.465 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:39:38.724 true
00:39:38.724 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075
00:39:38.724 11:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:38.984 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:38.984 Initializing NVMe Controllers
00:39:38.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:38.984 Controller IO queue size 128, less than required.
00:39:38.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:38.984 Initialization complete. Launching workers.
00:39:38.984 ========================================================
00:39:38.984                                                                                                       Latency(us)
00:39:38.984 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:39:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27181.63   13.27    4709.00    1736.50   12302.94
00:39:38.984 ========================================================
00:39:38.984 Total                                                                    : 27181.63   13.27    4709.00    1736.50   12302.94
00:39:38.984
00:39:38.984 [2024-12-07 11:51:38.262145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(6) to be set
00:39:39.244 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:39:39.244 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:39:39.244 true
00:39:39.244 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2802075
00:39:39.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2802075) - No such process
00:39:39.244 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2802075
00:39:39.244 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:39.503 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:39.763 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:39:39.763 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:39.763 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:39.763 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:39.763 11:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:39.763 null0 00:39:39.763 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:39.763 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:39.763 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:40.023 null1 00:39:40.023 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.023 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:40.023 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:40.281 null2 00:39:40.281 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.281 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- 
# (( i < nthreads )) 00:39:40.281 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:40.281 null3 00:39:40.281 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.281 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:40.282 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:40.545 null4 00:39:40.545 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.545 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:40.545 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:40.805 null5 00:39:40.805 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.805 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:40.805 11:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:40.805 null6 00:39:40.805 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:40.805 11:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:40.805 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:41.065 null7 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.065 11:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:41.065 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2808708 2808710 2808712 2808716 2808719 2808720 2808723 2808725 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.066 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.325 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.584 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.584 11:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.585 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.585 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.845 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.845 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.845 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.845 11:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.845 11:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:41.845 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.105 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:42.364 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.624 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:42.883 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.883 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.883 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:42.883 11:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:42.883 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:43.142 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.401 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.662 11:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:43.922 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:43.923 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:39:44.183 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:39:44.184 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:44.445 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:39:44.445 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:39:44.445 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:39:44.445 11:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.445 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.445 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.446 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:44.708 11:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:44.708 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.969 
11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:44.969 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:45.230 rmmod nvme_tcp 00:39:45.230 rmmod nvme_fabrics 00:39:45.230 rmmod nvme_keyring 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2801430 ']' 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2801430 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2801430 ']' 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2801430 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:45.230 11:51:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801430 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801430' 00:39:45.230 killing process with pid 2801430 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2801430 00:39:45.230 11:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2801430 00:39:45.801 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:45.801 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:45.801 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:45.801 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:46.061 
11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:46.061 11:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:47.973 00:39:47.973 real 0m49.579s 00:39:47.973 user 3m5.789s 00:39:47.973 sys 0m21.825s 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:47.973 ************************************ 00:39:47.973 END TEST nvmf_ns_hotplug_stress 00:39:47.973 ************************************ 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:47.973 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:47.973 ************************************ 00:39:47.973 START TEST nvmf_delete_subsystem 00:39:47.973 ************************************ 00:39:47.973 11:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:48.235 * Looking for test storage... 00:39:48.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.235 11:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.235 11:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.235 --rc genhtml_branch_coverage=1 00:39:48.235 --rc genhtml_function_coverage=1 00:39:48.235 --rc genhtml_legend=1 00:39:48.235 --rc geninfo_all_blocks=1 00:39:48.235 --rc geninfo_unexecuted_blocks=1 00:39:48.235 00:39:48.235 ' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.235 --rc genhtml_branch_coverage=1 00:39:48.235 --rc genhtml_function_coverage=1 00:39:48.235 --rc genhtml_legend=1 00:39:48.235 --rc geninfo_all_blocks=1 00:39:48.235 --rc geninfo_unexecuted_blocks=1 00:39:48.235 00:39:48.235 ' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.235 --rc genhtml_branch_coverage=1 00:39:48.235 --rc 
genhtml_function_coverage=1 00:39:48.235 --rc genhtml_legend=1 00:39:48.235 --rc geninfo_all_blocks=1 00:39:48.235 --rc geninfo_unexecuted_blocks=1 00:39:48.235 00:39:48.235 ' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.235 --rc genhtml_branch_coverage=1 00:39:48.235 --rc genhtml_function_coverage=1 00:39:48.235 --rc genhtml_legend=1 00:39:48.235 --rc geninfo_all_blocks=1 00:39:48.235 --rc geninfo_unexecuted_blocks=1 00:39:48.235 00:39:48.235 ' 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:48.235 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:48.236 11:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:48.236 11:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:56.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:56.399 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:56.399 Found net devices under 0000:31:00.0: cvl_0_0 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.399 11:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:56.399 Found net devices under 0000:31:00.1: cvl_0_1 00:39:56.399 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.400 11:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:39:56.400 00:39:56.400 --- 10.0.0.2 ping statistics --- 00:39:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.400 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:39:56.400 00:39:56.400 --- 10.0.0.1 ping statistics --- 00:39:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.400 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2813937 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2813937 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2813937 ']' 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.400 11:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.400 [2024-12-07 11:51:55.032651] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.400 [2024-12-07 11:51:55.034897] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:39:56.400 [2024-12-07 11:51:55.034979] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.400 [2024-12-07 11:51:55.170641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:56.400 [2024-12-07 11:51:55.266849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.400 [2024-12-07 11:51:55.266893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.400 [2024-12-07 11:51:55.266910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.400 [2024-12-07 11:51:55.266920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.400 [2024-12-07 11:51:55.266931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.400 [2024-12-07 11:51:55.268787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.400 [2024-12-07 11:51:55.268813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.400 [2024-12-07 11:51:55.512354] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:56.400 [2024-12-07 11:51:55.512445] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:56.400 [2024-12-07 11:51:55.512644] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.661 [2024-12-07 11:51:55.853554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.661 [2024-12-07 11:51:55.885947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.661 NULL1 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:39:56.661 Delay0 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2813997 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:56.661 11:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:56.924 [2024-12-07 11:51:56.036880] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:39:58.836 11:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:58.836 11:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.836 11:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error 
(sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 starting I/O failed: -6 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 [2024-12-07 11:51:58.121378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed 
with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Write completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.836 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 
00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 [2024-12-07 11:51:58.121840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error 
(sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 starting I/O failed: -6 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 [2024-12-07 11:51:58.122635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error 
(sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 
00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 Write completed with error (sct=0, sc=8) 00:39:58.837 Read completed with error (sct=0, sc=8) 00:39:58.837 [2024-12-07 11:51:58.123108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set 00:39:59.778 [2024-12-07 11:51:59.099968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(6) to be set 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 
00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 [2024-12-07 11:51:59.125614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026500 is same with the state(6) to be set 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 [2024-12-07 11:51:59.126136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(6) to be set 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 
00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 [2024-12-07 11:51:59.126871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030280 is same with the state(6) to be set 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 
Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Read completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 Write completed with error (sct=0, sc=8) 00:39:59.778 [2024-12-07 11:51:59.128986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030780 is same with the state(6) to be set 00:40:00.039 Initializing NVMe Controllers 00:40:00.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:00.039 Controller IO queue size 128, less than required. 00:40:00.039 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:00.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:00.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:00.039 Initialization complete. Launching workers. 
00:40:00.039 ======================================================== 00:40:00.039 Latency(us) 00:40:00.039 Device Information : IOPS MiB/s Average min max 00:40:00.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.41 0.08 892008.00 471.61 1011692.03 00:40:00.039 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.46 0.08 908856.36 497.68 1011936.63 00:40:00.039 ======================================================== 00:40:00.039 Total : 335.87 0.16 900257.71 471.61 1011936.63 00:40:00.039 00:40:00.039 [2024-12-07 11:51:59.130110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025b00 (9): Bad file descriptor 00:40:00.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:00.039 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.039 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:40:00.039 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2813997 00:40:00.039 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:00.301 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:00.301 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2813997 00:40:00.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2813997) - No such process 00:40:00.301 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2813997 00:40:00.301 11:51:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2813997 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2813997 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.302 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:00.563 [2024-12-07 11:51:59.661761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2814737 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:00.563 11:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:00.563 [2024-12-07 11:51:59.772164] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:40:01.135 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:01.135 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:01.135 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:01.395 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:01.395 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:01.395 11:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:01.965 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:01.965 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:01.965 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:02.534 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:02.534 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:02.534 11:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:03.103 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:03.103 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:03.104 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:03.363 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:03.363 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:03.363 11:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:03.623 Initializing NVMe Controllers 00:40:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:03.623 Controller IO queue size 128, less than required. 00:40:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:03.623 Initialization complete. Launching workers. 
00:40:03.623 ======================================================== 00:40:03.623 Latency(us) 00:40:03.623 Device Information : IOPS MiB/s Average min max 00:40:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003598.72 1000277.16 1008233.27 00:40:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004765.24 1000716.33 1010907.83 00:40:03.623 ======================================================== 00:40:03.623 Total : 256.00 0.12 1004181.98 1000277.16 1010907.83 00:40:03.623 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2814737 00:40:03.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2814737) - No such process 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2814737 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:40:03.884 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:03.884 rmmod nvme_tcp 00:40:04.145 rmmod nvme_fabrics 00:40:04.145 rmmod nvme_keyring 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2813937 ']' 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2813937 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2813937 ']' 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2813937 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813937 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:04.145 11:52:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813937' 00:40:04.145 killing process with pid 2813937 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2813937 00:40:04.145 11:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2813937 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.086 11:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.086 11:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:06.995 00:40:06.995 real 0m18.918s 00:40:06.995 user 0m27.346s 00:40:06.995 sys 0m7.528s 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:06.995 ************************************ 00:40:06.995 END TEST nvmf_delete_subsystem 00:40:06.995 ************************************ 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:06.995 ************************************ 00:40:06.995 START TEST nvmf_host_management 00:40:06.995 ************************************ 00:40:06.995 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:07.256 * Looking for test storage... 
00:40:07.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:07.256 11:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.256 --rc genhtml_branch_coverage=1 00:40:07.256 --rc genhtml_function_coverage=1 00:40:07.256 --rc genhtml_legend=1 00:40:07.256 --rc geninfo_all_blocks=1 00:40:07.256 --rc geninfo_unexecuted_blocks=1 00:40:07.256 00:40:07.256 ' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.256 --rc genhtml_branch_coverage=1 00:40:07.256 --rc genhtml_function_coverage=1 00:40:07.256 --rc genhtml_legend=1 00:40:07.256 --rc geninfo_all_blocks=1 00:40:07.256 --rc geninfo_unexecuted_blocks=1 00:40:07.256 00:40:07.256 ' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.256 --rc genhtml_branch_coverage=1 00:40:07.256 --rc genhtml_function_coverage=1 00:40:07.256 --rc genhtml_legend=1 00:40:07.256 --rc geninfo_all_blocks=1 00:40:07.256 --rc geninfo_unexecuted_blocks=1 00:40:07.256 00:40:07.256 ' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:07.256 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.256 --rc genhtml_branch_coverage=1 00:40:07.256 --rc genhtml_function_coverage=1 00:40:07.256 --rc genhtml_legend=1 00:40:07.256 --rc geninfo_all_blocks=1 00:40:07.256 --rc geninfo_unexecuted_blocks=1 00:40:07.256 00:40:07.256 ' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:07.256 11:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.256 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.257 
11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:07.257 11:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.393 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:15.393 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:15.393 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:15.393 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:15.393 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:15.394 
11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:15.394 11:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:15.394 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.394 11:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:15.394 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.394 11:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:15.394 Found net devices under 0000:31:00.0: cvl_0_0 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:15.394 Found net devices under 0000:31:00.1: cvl_0_1 00:40:15.394 11:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.394 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:15.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:15.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:40:15.395 00:40:15.395 --- 10.0.0.2 ping statistics --- 00:40:15.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.395 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:15.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:40:15.395 00:40:15.395 --- 10.0.0.1 ping statistics --- 00:40:15.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.395 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2819711 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2819711 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2819711 ']' 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.395 11:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 [2024-12-07 11:52:13.761911] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:15.395 [2024-12-07 11:52:13.764259] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:15.395 [2024-12-07 11:52:13.764345] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.395 [2024-12-07 11:52:13.925194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:15.395 [2024-12-07 11:52:14.057524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:15.395 [2024-12-07 11:52:14.057585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:15.395 [2024-12-07 11:52:14.057599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:15.395 [2024-12-07 11:52:14.057610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:15.395 [2024-12-07 11:52:14.057622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:15.395 [2024-12-07 11:52:14.060426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:15.395 [2024-12-07 11:52:14.060619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:15.395 [2024-12-07 11:52:14.060727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.395 [2024-12-07 11:52:14.060758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:15.395 [2024-12-07 11:52:14.358893] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:15.395 [2024-12-07 11:52:14.360344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:15.395 [2024-12-07 11:52:14.361294] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:15.395 [2024-12-07 11:52:14.361308] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:15.395 [2024-12-07 11:52:14.361718] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 [2024-12-07 11:52:14.561984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 Malloc0 00:40:15.395 [2024-12-07 11:52:14.693826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:15.395 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2820071 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2820071 /var/tmp/bdevperf.sock 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2820071 ']' 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:15.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:15.655 { 00:40:15.655 "params": { 00:40:15.655 "name": "Nvme$subsystem", 00:40:15.655 "trtype": "$TEST_TRANSPORT", 00:40:15.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:15.655 "adrfam": "ipv4", 00:40:15.655 "trsvcid": "$NVMF_PORT", 00:40:15.655 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:40:15.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:15.655 "hdgst": ${hdgst:-false}, 00:40:15.655 "ddgst": ${ddgst:-false} 00:40:15.655 }, 00:40:15.655 "method": "bdev_nvme_attach_controller" 00:40:15.655 } 00:40:15.655 EOF 00:40:15.655 )") 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:15.655 11:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:15.655 "params": { 00:40:15.655 "name": "Nvme0", 00:40:15.655 "trtype": "tcp", 00:40:15.655 "traddr": "10.0.0.2", 00:40:15.655 "adrfam": "ipv4", 00:40:15.655 "trsvcid": "4420", 00:40:15.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:15.655 "hdgst": false, 00:40:15.655 "ddgst": false 00:40:15.655 }, 00:40:15.655 "method": "bdev_nvme_attach_controller" 00:40:15.655 }' 00:40:15.656 [2024-12-07 11:52:14.826259] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:15.656 [2024-12-07 11:52:14.826363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820071 ] 00:40:15.656 [2024-12-07 11:52:14.951528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.918 [2024-12-07 11:52:15.047618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.490 Running I/O for 10 seconds... 
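The `gen_nvmf_target_json` expansion shown above (one heredoc stanza per subsystem index, `hdgst`/`ddgst` defaulting to false, the result normalized through `jq`) can be approximated with a minimal sketch. The helper below is a simplification with hard-coded address and port; the real helper substitutes `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, and `$TEST_TRANSPORT` as the trace shows:

```shell
# Simplified stand-in for gen_nvmf_target_json: emit one
# bdev_nvme_attach_controller stanza for a given subsystem index.
# Values are hard-coded here for illustration only.
gen_target_json() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 0
```

bdevperf then consumes the generated config through process substitution, which is why the trace invokes it as `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 ...`.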
00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:16.490 11:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:16.490 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.753 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.753 [2024-12-07 11:52:15.968821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:16.753 [2024-12-07 11:52:15.968882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:16.753 [2024-12-07 11:52:15.968894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:16.753 [2024-12-07 11:52:15.968904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:16.754 [2024-12-07 11:52:15.969654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.969970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.969983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:40:16.754 [2024-12-07 11:52:15.969994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970529] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.754 [2024-12-07 11:52:15.970605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.754 [2024-12-07 11:52:15.970624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.754 [2024-12-07 11:52:15.970636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 
[2024-12-07 11:52:15.970802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.970976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.970988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:16.755 [2024-12-07 11:52:15.970998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:16.755 [2024-12-07 11:52:15.971234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.755 [2024-12-07 11:52:15.971246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039ec00 is same with the state(6) to be set 00:40:16.755 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.755 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.755 [2024-12-07 11:52:15.972751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:16.755 task offset: 73728 on job bdev=Nvme0n1 fails 00:40:16.755 00:40:16.755 Latency(us) 00:40:16.755 [2024-12-07T10:52:16.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:16.755 Job: Nvme0n1 ended in about 0.43 seconds with error 00:40:16.755 Verification LBA range: start 0x0 length 0x400 00:40:16.755 Nvme0n1 : 0.43 1333.30 83.33 148.14 0.00 
41914.82 5324.80 39540.05 00:40:16.755 [2024-12-07T10:52:16.109Z] =================================================================================================================== 00:40:16.755 [2024-12-07T10:52:16.109Z] Total : 1333.30 83.33 148.14 0.00 41914.82 5324.80 39540.05 00:40:16.755 [2024-12-07 11:52:15.976963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:16.755 [2024-12-07 11:52:15.977008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:16.755 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.755 11:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:16.755 [2024-12-07 11:52:16.029805] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2820071 00:40:17.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2820071) - No such process 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
gen_nvmf_target_json 0 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:17.696 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:17.696 { 00:40:17.696 "params": { 00:40:17.696 "name": "Nvme$subsystem", 00:40:17.696 "trtype": "$TEST_TRANSPORT", 00:40:17.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.696 "adrfam": "ipv4", 00:40:17.696 "trsvcid": "$NVMF_PORT", 00:40:17.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.697 "hdgst": ${hdgst:-false}, 00:40:17.697 "ddgst": ${ddgst:-false} 00:40:17.697 }, 00:40:17.697 "method": "bdev_nvme_attach_controller" 00:40:17.697 } 00:40:17.697 EOF 00:40:17.697 )") 00:40:17.697 11:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:17.697 11:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:40:17.697 11:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:17.697 11:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:17.697 "params": { 00:40:17.697 "name": "Nvme0", 00:40:17.697 "trtype": "tcp", 00:40:17.697 "traddr": "10.0.0.2", 00:40:17.697 "adrfam": "ipv4", 00:40:17.697 "trsvcid": "4420", 00:40:17.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:17.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:17.697 "hdgst": false, 00:40:17.697 "ddgst": false 00:40:17.697 }, 00:40:17.697 "method": "bdev_nvme_attach_controller" 00:40:17.697 }' 00:40:17.957 [2024-12-07 11:52:17.071505] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:17.957 [2024-12-07 11:52:17.071602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820446 ] 00:40:17.957 [2024-12-07 11:52:17.196901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.957 [2024-12-07 11:52:17.293415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.528 Running I/O for 1 seconds... 
00:40:19.470 1505.00 IOPS, 94.06 MiB/s 00:40:19.470 Latency(us) 00:40:19.470 [2024-12-07T10:52:18.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.470 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:19.470 Verification LBA range: start 0x0 length 0x400 00:40:19.470 Nvme0n1 : 1.03 1553.54 97.10 0.00 0.00 40356.82 2539.52 37355.52 00:40:19.470 [2024-12-07T10:52:18.824Z] =================================================================================================================== 00:40:19.470 [2024-12-07T10:52:18.824Z] Total : 1553.54 97.10 0.00 0.00 40356.82 2539.52 37355.52 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:20.411 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:20.412 11:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.412 rmmod nvme_tcp 00:40:20.412 rmmod nvme_fabrics 00:40:20.412 rmmod nvme_keyring 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2819711 ']' 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2819711 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2819711 ']' 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2819711 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819711 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:20.412 11:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819711' 00:40:20.412 killing process with pid 2819711 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2819711 00:40:20.412 11:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2819711 00:40:20.988 [2024-12-07 11:52:20.157650] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.988 11:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.959 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:22.959 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:22.959 00:40:22.959 real 0m15.978s 00:40:22.959 user 0m25.755s 00:40:22.959 sys 0m7.838s 00:40:22.959 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.959 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:22.959 ************************************ 00:40:22.959 END TEST nvmf_host_management 00:40:22.959 ************************************ 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:23.221 ************************************ 00:40:23.221 START TEST nvmf_lvol 00:40:23.221 ************************************ 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:23.221 * Looking for test storage... 
00:40:23.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:23.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.221 --rc genhtml_branch_coverage=1 00:40:23.221 --rc genhtml_function_coverage=1 00:40:23.221 --rc genhtml_legend=1 00:40:23.221 --rc geninfo_all_blocks=1 00:40:23.221 --rc geninfo_unexecuted_blocks=1 00:40:23.221 00:40:23.221 ' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:23.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.221 --rc genhtml_branch_coverage=1 00:40:23.221 --rc genhtml_function_coverage=1 00:40:23.221 --rc genhtml_legend=1 00:40:23.221 --rc geninfo_all_blocks=1 00:40:23.221 --rc geninfo_unexecuted_blocks=1 00:40:23.221 00:40:23.221 ' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:23.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.221 --rc genhtml_branch_coverage=1 00:40:23.221 --rc genhtml_function_coverage=1 00:40:23.221 --rc genhtml_legend=1 00:40:23.221 --rc geninfo_all_blocks=1 00:40:23.221 --rc geninfo_unexecuted_blocks=1 00:40:23.221 00:40:23.221 ' 00:40:23.221 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:23.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.222 --rc genhtml_branch_coverage=1 00:40:23.222 --rc genhtml_function_coverage=1 00:40:23.222 --rc genhtml_legend=1 00:40:23.222 --rc geninfo_all_blocks=1 00:40:23.222 --rc geninfo_unexecuted_blocks=1 00:40:23.222 00:40:23.222 ' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:23.222 
11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:23.222 11:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:29.808 11:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:29.808 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:29.809 11:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:29.809 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:29.809 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.809 11:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:29.809 Found net devices under 0000:31:00.0: cvl_0_0 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.809 11:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:29.809 Found net devices under 0000:31:00.1: cvl_0_1 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:29.809 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:30.070 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:30.331 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:30.331 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:30.331 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:30.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:30.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:40:30.331 00:40:30.331 --- 10.0.0.2 ping statistics --- 00:40:30.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.331 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:40:30.331 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:30.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:30.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:40:30.331 00:40:30.331 --- 10.0.0.1 ping statistics --- 00:40:30.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.332 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2825183 
00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2825183 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2825183 ']' 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:30.332 11:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:30.332 [2024-12-07 11:52:29.589323] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:30.332 [2024-12-07 11:52:29.591789] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:40:30.332 [2024-12-07 11:52:29.591877] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:30.593 [2024-12-07 11:52:29.739872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:30.593 [2024-12-07 11:52:29.839324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:30.593 [2024-12-07 11:52:29.839367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:30.593 [2024-12-07 11:52:29.839381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:30.593 [2024-12-07 11:52:29.839391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:30.593 [2024-12-07 11:52:29.839401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:30.593 [2024-12-07 11:52:29.841435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:30.593 [2024-12-07 11:52:29.841515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.593 [2024-12-07 11:52:29.841520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:30.854 [2024-12-07 11:52:30.086196] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:30.854 [2024-12-07 11:52:30.086456] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:30.854 [2024-12-07 11:52:30.086865] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:30.854 [2024-12-07 11:52:30.087126] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:31.116 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:31.378 [2024-12-07 11:52:30.538664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.378 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:31.639 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:31.639 11:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:31.900 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:31.900 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:31.900 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:32.162 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fe5613a8-d02e-4760-8f27-777b4d36d251 00:40:32.162 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fe5613a8-d02e-4760-8f27-777b4d36d251 lvol 20 00:40:32.422 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=aa667e45-754b-42dd-b3b0-8058c0c061dd 00:40:32.422 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:32.422 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa667e45-754b-42dd-b3b0-8058c0c061dd 00:40:32.683 11:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:32.944 [2024-12-07 11:52:32.054469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:32.944 11:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:32.944 
11:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2825620 00:40:32.944 11:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:32.944 11:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:34.332 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot aa667e45-754b-42dd-b3b0-8058c0c061dd MY_SNAPSHOT 00:40:34.332 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ea095ee6-453d-4d7b-b5ce-cb7c8d74372c 00:40:34.332 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize aa667e45-754b-42dd-b3b0-8058c0c061dd 30 00:40:34.593 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ea095ee6-453d-4d7b-b5ce-cb7c8d74372c MY_CLONE 00:40:34.593 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e64c16e1-753a-4d0c-8ed2-94ae8169cd2b 00:40:34.593 11:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e64c16e1-753a-4d0c-8ed2-94ae8169cd2b 00:40:35.165 11:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2825620 00:40:43.303 Initializing NVMe Controllers 00:40:43.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:43.303 
Controller IO queue size 128, less than required. 00:40:43.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:43.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:43.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:43.303 Initialization complete. Launching workers. 00:40:43.303 ======================================================== 00:40:43.303 Latency(us) 00:40:43.303 Device Information : IOPS MiB/s Average min max 00:40:43.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15037.47 58.74 8512.94 261.51 99375.57 00:40:43.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11474.72 44.82 11160.76 4738.85 121710.60 00:40:43.303 ======================================================== 00:40:43.303 Total : 26512.19 103.56 9658.94 261.51 121710.60 00:40:43.303 00:40:43.564 11:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:43.564 11:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa667e45-754b-42dd-b3b0-8058c0c061dd 00:40:43.825 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe5613a8-d02e-4760-8f27-777b4d36d251 00:40:44.086 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:44.086 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:44.086 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.087 rmmod nvme_tcp 00:40:44.087 rmmod nvme_fabrics 00:40:44.087 rmmod nvme_keyring 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2825183 ']' 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2825183 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2825183 ']' 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2825183 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2825183 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825183' 00:40:44.087 killing process with pid 2825183 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2825183 00:40:44.087 11:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2825183 00:40:45.029 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:45.029 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:45.029 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:45.029 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.291 11:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:45.291 11:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:47.222 00:40:47.222 real 0m24.102s 00:40:47.222 user 0m56.507s 00:40:47.222 sys 0m10.289s 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:47.222 ************************************ 00:40:47.222 END TEST nvmf_lvol 00:40:47.222 ************************************ 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:47.222 ************************************ 00:40:47.222 START TEST nvmf_lvs_grow 00:40:47.222 ************************************ 00:40:47.222 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:47.485 * Looking for test storage... 
00:40:47.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:47.485 11:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:47.485 11:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:47.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.485 --rc genhtml_branch_coverage=1 00:40:47.485 --rc genhtml_function_coverage=1 00:40:47.485 --rc genhtml_legend=1 00:40:47.485 --rc geninfo_all_blocks=1 00:40:47.485 --rc geninfo_unexecuted_blocks=1 00:40:47.485 00:40:47.485 ' 00:40:47.485 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:47.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.485 --rc genhtml_branch_coverage=1 00:40:47.485 --rc genhtml_function_coverage=1 00:40:47.485 --rc genhtml_legend=1 00:40:47.485 --rc geninfo_all_blocks=1 00:40:47.485 --rc geninfo_unexecuted_blocks=1 00:40:47.485 00:40:47.485 ' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.486 --rc genhtml_branch_coverage=1 00:40:47.486 --rc genhtml_function_coverage=1 00:40:47.486 --rc genhtml_legend=1 00:40:47.486 --rc geninfo_all_blocks=1 00:40:47.486 --rc geninfo_unexecuted_blocks=1 00:40:47.486 00:40:47.486 ' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.486 --rc genhtml_branch_coverage=1 00:40:47.486 --rc genhtml_function_coverage=1 00:40:47.486 --rc genhtml_legend=1 00:40:47.486 --rc geninfo_all_blocks=1 00:40:47.486 --rc 
geninfo_unexecuted_blocks=1 00:40:47.486 00:40:47.486 ' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:47.486 11:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.486 11:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:47.486 11:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:47.486 11:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:54.071 
11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:54.071 11:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:54.071 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:54.072 11:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:54.072 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:54.072 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:54.072 Found net devices under 0000:31:00.0: cvl_0_0 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.072 11:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:54.072 Found net devices under 0000:31:00.1: cvl_0_1 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:54.072 
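The loop above (nvmf/common.sh@410-429) maps each PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix. A minimal sketch of that lookup, run against a throwaway fake sysfs tree so it works unprivileged and without the real NIC (the `cvl_*` names and BDFs are taken from this log):

```shell
# Sketch of the pci -> net_dev mapping seen in nvmf/common.sh, using a
# fake sysfs tree so it runs on any machine without ICE hardware.
set -euo pipefail

fake_sys=$(mktemp -d)
# Mimic /sys/bus/pci/devices/<bdf>/net/<ifname> for two ports of one NIC.
mkdir -p "$fake_sys/0000:31:00.0/net/cvl_0_0" \
         "$fake_sys/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    # Glob the net/ subdirectory; ##*/ strips the path, leaving the ifname.
    pci_net_devs=("$fake_sys/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$fake_sys"
```

The echoed lines mirror the "Found net devices under 0000:31:00.x: cvl_0_x" notices above.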
11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:54.072 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:54.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:54.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:40:54.334 00:40:54.334 --- 10.0.0.2 ping statistics --- 00:40:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.334 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:54.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:54.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:40:54.334 00:40:54.334 --- 10.0.0.1 ping statistics --- 00:40:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.334 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:54.334 11:52:53 
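The nvmf_tcp_init sequence above moves one NIC port into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host. A dry-run sketch of that flow, with commands only echoed because the real sequence needs root; the interface and namespace names are read off this log:

```shell
# Dry-run of the nvmf_tcp_init flow above: one port becomes the target
# side inside a namespace, its sibling stays in the root namespace as
# the initiator. "run" only records/prints; substitute real execution
# (as root) to reproduce the setup.
run() { echo "+ $*"; cmds+=("$*"); }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
cmds=()

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"            # target port leaves root ns
run ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1          # target -> initiator
```

The two final pings correspond to the ping statistics blocks above; the iptables ACCEPT rule for port 4420 (nvmf/common.sh@287) is omitted from this sketch.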
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2831971 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2831971 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2831971 ']' 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:54.334 11:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:54.334 [2024-12-07 11:52:53.591429] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:54.334 [2024-12-07 11:52:53.593751] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:40:54.334 [2024-12-07 11:52:53.593831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:54.595 [2024-12-07 11:52:53.728056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.595 [2024-12-07 11:52:53.823064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:54.595 [2024-12-07 11:52:53.823108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:54.595 [2024-12-07 11:52:53.823121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:54.595 [2024-12-07 11:52:53.823135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:54.595 [2024-12-07 11:52:53.823146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:54.595 [2024-12-07 11:52:53.824327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.856 [2024-12-07 11:52:54.067930] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:54.856 [2024-12-07 11:52:54.068229] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:55.117 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:55.378 [2024-12-07 11:52:54.521091] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:55.378 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:55.378 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:55.378 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:55.379 ************************************ 00:40:55.379 START TEST lvs_grow_clean 00:40:55.379 ************************************ 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:40:55.379 11:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:55.379 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:55.640 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:55.640 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:55.640 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=818be392-16e8-414b-b9ed-414488c4a93f 00:40:55.640 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:40:55.640 11:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:55.904 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:55.904 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:55.904 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 818be392-16e8-414b-b9ed-414488c4a93f lvol 150 00:40:56.165 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5d5622e-9923-4e8c-8543-e2d82230fb96 00:40:56.165 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:56.165 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:56.165 [2024-12-07 11:52:55.485009] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
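The `data_clusters=49` check above follows from the sizes chosen earlier: a 200 MiB AIO bdev carved into 4 MiB clusters (`--cluster-sz 4194304`) gives 50 clusters, of which one is consumed by lvstore metadata in this run, leaving 49. The one-cluster overhead is read off this log, not a general guarantee (it varies with `--md-pages-per-cluster-ratio` and lvstore size):

```shell
# Cluster accounting behind "data_clusters=49" above.
aio_mb=200 cluster_mb=4 md_clusters=1   # md_clusters: observed in this log

total_clusters=$(( aio_mb / cluster_mb ))
data_clusters=$(( total_clusters - md_clusters ))
echo "data_clusters=$data_clusters"

# After "truncate -s 400M" + bdev_aio_rescan + bdev_lvol_grow_lvstore,
# the same arithmetic yields the 99 checked later in this test:
grown_data_clusters=$(( 400 / cluster_mb - md_clusters ))
echo "grown_data_clusters=$grown_data_clusters"
```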
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:56.165 [2024-12-07 11:52:55.485163] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:56.165 true 00:40:56.165 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:40:56.165 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:56.425 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:56.425 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:56.685 11:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5d5622e-9923-4e8c-8543-e2d82230fb96 00:40:56.685 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:56.944 [2024-12-07 11:52:56.161292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:56.944 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
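The bdev_aio_rescan notice above reports the resize in bdev blocks; with the 4096-byte block size passed to `bdev_aio_create`, the old and new counts are just file size divided by block size:

```shell
# Block counts in the bdev_aio_rescan notice above: the AIO file grows
# from 200 MiB to 400 MiB, and the bdev uses 4096-byte blocks.
block_size=4096
old_blocks=$(( 200 * 1024 * 1024 / block_size ))
new_blocks=$(( 400 * 1024 * 1024 / block_size ))
echo "old=$old_blocks new=$new_blocks"   # old=51200 new=102400
```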
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2832647 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2832647 /var/tmp/bdevperf.sock 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2832647 ']' 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:57.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:57.203 11:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:57.203 [2024-12-07 11:52:56.420949] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:40:57.203 [2024-12-07 11:52:56.421066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832647 ] 00:40:57.463 [2024-12-07 11:52:56.565963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.463 [2024-12-07 11:52:56.664802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:58.032 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:58.032 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:58.032 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:58.292 Nvme0n1 00:40:58.292 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:58.553 [ 00:40:58.553 { 00:40:58.553 "name": "Nvme0n1", 00:40:58.553 "aliases": [ 00:40:58.553 "e5d5622e-9923-4e8c-8543-e2d82230fb96" 00:40:58.553 ], 00:40:58.553 "product_name": "NVMe disk", 00:40:58.553 "block_size": 4096, 00:40:58.553 "num_blocks": 38912, 00:40:58.553 "uuid": "e5d5622e-9923-4e8c-8543-e2d82230fb96", 00:40:58.553 "numa_id": 0, 00:40:58.553 "assigned_rate_limits": { 00:40:58.553 "rw_ios_per_sec": 0, 00:40:58.553 "rw_mbytes_per_sec": 0, 00:40:58.553 "r_mbytes_per_sec": 0, 00:40:58.553 "w_mbytes_per_sec": 0 00:40:58.553 }, 00:40:58.553 "claimed": false, 00:40:58.553 "zoned": false, 00:40:58.553 "supported_io_types": { 00:40:58.553 "read": true, 00:40:58.553 "write": true, 00:40:58.553 "unmap": true, 00:40:58.553 "flush": true, 00:40:58.553 "reset": true, 00:40:58.553 "nvme_admin": true, 00:40:58.553 "nvme_io": true, 00:40:58.553 "nvme_io_md": false, 00:40:58.553 "write_zeroes": true, 00:40:58.553 "zcopy": false, 00:40:58.553 "get_zone_info": false, 00:40:58.553 "zone_management": false, 00:40:58.553 "zone_append": false, 00:40:58.553 "compare": true, 00:40:58.553 "compare_and_write": true, 00:40:58.553 "abort": true, 00:40:58.553 "seek_hole": false, 00:40:58.553 "seek_data": false, 00:40:58.553 "copy": true, 00:40:58.553 "nvme_iov_md": false 00:40:58.553 }, 00:40:58.553 "memory_domains": [ 00:40:58.553 { 00:40:58.553 "dma_device_id": "system", 00:40:58.553 "dma_device_type": 1 00:40:58.553 } 00:40:58.553 ], 00:40:58.553 "driver_specific": { 00:40:58.553 "nvme": [ 00:40:58.553 { 00:40:58.553 "trid": { 00:40:58.553 "trtype": "TCP", 00:40:58.553 "adrfam": "IPv4", 00:40:58.553 "traddr": "10.0.0.2", 00:40:58.553 "trsvcid": "4420", 00:40:58.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:58.553 }, 00:40:58.553 "ctrlr_data": { 00:40:58.553 "cntlid": 1, 00:40:58.553 "vendor_id": "0x8086", 00:40:58.553 "model_number": "SPDK bdev Controller", 00:40:58.553 "serial_number": "SPDK0", 00:40:58.553 
"firmware_revision": "25.01", 00:40:58.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:58.554 "oacs": { 00:40:58.554 "security": 0, 00:40:58.554 "format": 0, 00:40:58.554 "firmware": 0, 00:40:58.554 "ns_manage": 0 00:40:58.554 }, 00:40:58.554 "multi_ctrlr": true, 00:40:58.554 "ana_reporting": false 00:40:58.554 }, 00:40:58.554 "vs": { 00:40:58.554 "nvme_version": "1.3" 00:40:58.554 }, 00:40:58.554 "ns_data": { 00:40:58.554 "id": 1, 00:40:58.554 "can_share": true 00:40:58.554 } 00:40:58.554 } 00:40:58.554 ], 00:40:58.554 "mp_policy": "active_passive" 00:40:58.554 } 00:40:58.554 } 00:40:58.554 ] 00:40:58.554 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2832801 00:40:58.554 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:58.554 11:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:58.554 Running I/O for 10 seconds... 
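The bdev dump above shows the 150 MiB lvol exposed over NVMe/TCP as `num_blocks: 38912` of 4096 bytes, i.e. 152 MiB rather than 150. That is consistent with lvol sizes being rounded up to whole clusters (4 MiB here): 150 MiB rounds up to 38 clusters. A quick check of that arithmetic:

```shell
# Why the 150 MiB lvol reports num_blocks 38912 above: round the lvol
# size up to whole 4 MiB clusters, then express it in 4096-byte blocks.
lvol_mb=150 cluster_mb=4 block_size=4096
clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))   # ceil(150/4) = 38
num_blocks=$(( clusters * cluster_mb * 1024 * 1024 / block_size ))
echo "num_blocks=$num_blocks"   # 38912, matching the bdev dump
```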
00:40:59.495 Latency(us) 00:40:59.495 [2024-12-07T10:52:58.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:59.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.495 Nvme0n1 : 1.00 16012.00 62.55 0.00 0.00 0.00 0.00 0.00 00:40:59.495 [2024-12-07T10:52:58.849Z] =================================================================================================================== 00:40:59.495 [2024-12-07T10:52:58.849Z] Total : 16012.00 62.55 0.00 0.00 0.00 0.00 0.00 00:40:59.495 00:41:00.439 11:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:00.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:00.700 Nvme0n1 : 2.00 16166.00 63.15 0.00 0.00 0.00 0.00 0.00 00:41:00.700 [2024-12-07T10:53:00.054Z] =================================================================================================================== 00:41:00.700 [2024-12-07T10:53:00.054Z] Total : 16166.00 63.15 0.00 0.00 0.00 0.00 0.00 00:41:00.700 00:41:00.700 true 00:41:00.700 11:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:00.700 11:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:00.961 11:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:00.961 11:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:00.961 11:53:00 
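In the bdevperf tables above, the MiB/s column is derived from the IOPS column: every I/O is 4096 bytes (`-o 4096`), so MiB/s = IOPS × 4096 / 2^20 = IOPS / 256. Checking the first 1-second sample:

```shell
# Relationship between the IOPS and MiB/s columns in the bdevperf
# tables: each I/O is 4096 bytes, so MiB/s = IOPS / 256.
# awk handles the floating-point division.
iops=16012.00
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i / 256 }')
echo "$mibps"   # 62.55, matching the 1 s sample above
```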
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2832801 00:41:01.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.531 Nvme0n1 : 3.00 16217.33 63.35 0.00 0.00 0.00 0.00 0.00 00:41:01.531 [2024-12-07T10:53:00.885Z] =================================================================================================================== 00:41:01.531 [2024-12-07T10:53:00.885Z] Total : 16217.33 63.35 0.00 0.00 0.00 0.00 0.00 00:41:01.531 00:41:02.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.918 Nvme0n1 : 4.00 16274.50 63.57 0.00 0.00 0.00 0.00 0.00 00:41:02.918 [2024-12-07T10:53:02.272Z] =================================================================================================================== 00:41:02.918 [2024-12-07T10:53:02.272Z] Total : 16274.50 63.57 0.00 0.00 0.00 0.00 0.00 00:41:02.918 00:41:03.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.865 Nvme0n1 : 5.00 16296.20 63.66 0.00 0.00 0.00 0.00 0.00 00:41:03.865 [2024-12-07T10:53:03.219Z] =================================================================================================================== 00:41:03.865 [2024-12-07T10:53:03.219Z] Total : 16296.20 63.66 0.00 0.00 0.00 0.00 0.00 00:41:03.865 00:41:04.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:04.807 Nvme0n1 : 6.00 16321.33 63.76 0.00 0.00 0.00 0.00 0.00 00:41:04.807 [2024-12-07T10:53:04.161Z] =================================================================================================================== 00:41:04.807 [2024-12-07T10:53:04.161Z] Total : 16321.33 63.76 0.00 0.00 0.00 0.00 0.00 00:41:04.807 00:41:05.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:05.746 Nvme0n1 : 7.00 16339.14 63.82 0.00 0.00 0.00 0.00 0.00 00:41:05.746 [2024-12-07T10:53:05.100Z] 
=================================================================================================================== 00:41:05.746 [2024-12-07T10:53:05.100Z] Total : 16339.14 63.82 0.00 0.00 0.00 0.00 0.00 00:41:05.746 00:41:06.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:06.693 Nvme0n1 : 8.00 16360.50 63.91 0.00 0.00 0.00 0.00 0.00 00:41:06.693 [2024-12-07T10:53:06.047Z] =================================================================================================================== 00:41:06.693 [2024-12-07T10:53:06.047Z] Total : 16360.50 63.91 0.00 0.00 0.00 0.00 0.00 00:41:06.693 00:41:07.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:07.633 Nvme0n1 : 9.00 16377.11 63.97 0.00 0.00 0.00 0.00 0.00 00:41:07.633 [2024-12-07T10:53:06.988Z] =================================================================================================================== 00:41:07.634 [2024-12-07T10:53:06.988Z] Total : 16377.11 63.97 0.00 0.00 0.00 0.00 0.00 00:41:07.634 00:41:08.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.576 Nvme0n1 : 10.00 16384.10 64.00 0.00 0.00 0.00 0.00 0.00 00:41:08.576 [2024-12-07T10:53:07.930Z] =================================================================================================================== 00:41:08.576 [2024-12-07T10:53:07.930Z] Total : 16384.10 64.00 0.00 0.00 0.00 0.00 0.00 00:41:08.576 00:41:08.576 00:41:08.576 Latency(us) 00:41:08.576 [2024-12-07T10:53:07.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.576 Nvme0n1 : 10.00 16385.08 64.00 0.00 0.00 7808.06 3358.72 15619.41 00:41:08.576 [2024-12-07T10:53:07.930Z] =================================================================================================================== 00:41:08.576 [2024-12-07T10:53:07.930Z] Total : 16385.08 64.00 
0.00 0.00 7808.06 3358.72 15619.41 00:41:08.576 { 00:41:08.576 "results": [ 00:41:08.576 { 00:41:08.576 "job": "Nvme0n1", 00:41:08.576 "core_mask": "0x2", 00:41:08.576 "workload": "randwrite", 00:41:08.576 "status": "finished", 00:41:08.576 "queue_depth": 128, 00:41:08.576 "io_size": 4096, 00:41:08.576 "runtime": 10.003307, 00:41:08.576 "iops": 16385.081453563307, 00:41:08.576 "mibps": 64.00422442798167, 00:41:08.576 "io_failed": 0, 00:41:08.576 "io_timeout": 0, 00:41:08.576 "avg_latency_us": 7808.05773751055, 00:41:08.576 "min_latency_us": 3358.72, 00:41:08.576 "max_latency_us": 15619.413333333334 00:41:08.576 } 00:41:08.576 ], 00:41:08.576 "core_count": 1 00:41:08.576 } 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2832647 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2832647 ']' 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2832647 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:08.576 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832647 00:41:08.836 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:08.836 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:08.836 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2832647' 00:41:08.836 killing process with pid 2832647 00:41:08.836 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2832647 00:41:08.836 Received shutdown signal, test time was about 10.000000 seconds 00:41:08.836 00:41:08.836 Latency(us) 00:41:08.836 [2024-12-07T10:53:08.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.836 [2024-12-07T10:53:08.190Z] =================================================================================================================== 00:41:08.836 [2024-12-07T10:53:08.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:08.836 11:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2832647 00:41:09.096 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:09.358 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:09.619 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:09.619 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:09.619 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:09.880 11:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:09.880 11:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:09.880 [2024-12-07 11:53:09.137125] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:09.880 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:10.141 request: 00:41:10.141 { 00:41:10.141 "uuid": "818be392-16e8-414b-b9ed-414488c4a93f", 00:41:10.141 "method": "bdev_lvol_get_lvstores", 00:41:10.141 "req_id": 1 00:41:10.141 } 00:41:10.141 Got JSON-RPC error response 00:41:10.141 response: 00:41:10.141 { 00:41:10.141 "code": -19, 00:41:10.141 "message": "No such device" 00:41:10.141 } 00:41:10.141 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:41:10.141 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:10.141 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:10.141 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:10.141 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:10.401 aio_bdev 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5d5622e-9923-4e8c-8543-e2d82230fb96 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e5d5622e-9923-4e8c-8543-e2d82230fb96 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:10.401 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:10.402 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e5d5622e-9923-4e8c-8543-e2d82230fb96 -t 2000 00:41:10.663 [ 00:41:10.663 { 00:41:10.663 "name": "e5d5622e-9923-4e8c-8543-e2d82230fb96", 00:41:10.663 "aliases": [ 00:41:10.663 "lvs/lvol" 00:41:10.663 ], 00:41:10.663 "product_name": "Logical Volume", 00:41:10.663 "block_size": 4096, 00:41:10.663 "num_blocks": 38912, 00:41:10.663 "uuid": "e5d5622e-9923-4e8c-8543-e2d82230fb96", 00:41:10.663 "assigned_rate_limits": { 00:41:10.663 
"rw_ios_per_sec": 0, 00:41:10.663 "rw_mbytes_per_sec": 0, 00:41:10.663 "r_mbytes_per_sec": 0, 00:41:10.663 "w_mbytes_per_sec": 0 00:41:10.663 }, 00:41:10.663 "claimed": false, 00:41:10.663 "zoned": false, 00:41:10.663 "supported_io_types": { 00:41:10.663 "read": true, 00:41:10.663 "write": true, 00:41:10.663 "unmap": true, 00:41:10.663 "flush": false, 00:41:10.664 "reset": true, 00:41:10.664 "nvme_admin": false, 00:41:10.664 "nvme_io": false, 00:41:10.664 "nvme_io_md": false, 00:41:10.664 "write_zeroes": true, 00:41:10.664 "zcopy": false, 00:41:10.664 "get_zone_info": false, 00:41:10.664 "zone_management": false, 00:41:10.664 "zone_append": false, 00:41:10.664 "compare": false, 00:41:10.664 "compare_and_write": false, 00:41:10.664 "abort": false, 00:41:10.664 "seek_hole": true, 00:41:10.664 "seek_data": true, 00:41:10.664 "copy": false, 00:41:10.664 "nvme_iov_md": false 00:41:10.664 }, 00:41:10.664 "driver_specific": { 00:41:10.664 "lvol": { 00:41:10.664 "lvol_store_uuid": "818be392-16e8-414b-b9ed-414488c4a93f", 00:41:10.664 "base_bdev": "aio_bdev", 00:41:10.664 "thin_provision": false, 00:41:10.664 "num_allocated_clusters": 38, 00:41:10.664 "snapshot": false, 00:41:10.664 "clone": false, 00:41:10.664 "esnap_clone": false 00:41:10.664 } 00:41:10.664 } 00:41:10.664 } 00:41:10.664 ] 00:41:10.664 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:41:10.664 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:10.664 11:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:10.925 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 
)) 00:41:10.925 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:10.925 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:10.925 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:10.925 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e5d5622e-9923-4e8c-8543-e2d82230fb96 00:41:11.186 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 818be392-16e8-414b-b9ed-414488c4a93f 00:41:11.447 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:11.707 00:41:11.707 real 0m16.261s 00:41:11.707 user 0m15.899s 00:41:11.707 sys 0m1.407s 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:11.707 ************************************ 00:41:11.707 END TEST lvs_grow_clean 00:41:11.707 
************************************ 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:11.707 ************************************ 00:41:11.707 START TEST lvs_grow_dirty 00:41:11.707 ************************************ 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:11.707 11:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:11.968 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:11.968 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3d094379-f6b5-458d-8127-d415783e6758 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:12.229 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d094379-f6b5-458d-8127-d415783e6758 lvol 150 00:41:12.490 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:12.490 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:12.490 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:12.753 [2024-12-07 11:53:11.857126] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:12.753 [2024-12-07 11:53:11.857316] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:12.753 true 00:41:12.753 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:12.753 11:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:12.753 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:12.753 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:13.015 
11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:13.277 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:13.277 [2024-12-07 11:53:12.569507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:13.277 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2835757 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2835757 /var/tmp/bdevperf.sock 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2835757 ']' 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:13.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:13.539 11:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:13.539 [2024-12-07 11:53:12.844543] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:13.539 [2024-12-07 11:53:12.844677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835757 ] 00:41:13.801 [2024-12-07 11:53:12.999822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.801 [2024-12-07 11:53:13.121530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.376 11:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:14.376 11:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:14.376 11:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b 
Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:14.638 Nvme0n1 00:41:14.638 11:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:14.899 [ 00:41:14.899 { 00:41:14.899 "name": "Nvme0n1", 00:41:14.899 "aliases": [ 00:41:14.899 "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79" 00:41:14.899 ], 00:41:14.899 "product_name": "NVMe disk", 00:41:14.899 "block_size": 4096, 00:41:14.899 "num_blocks": 38912, 00:41:14.899 "uuid": "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79", 00:41:14.899 "numa_id": 0, 00:41:14.899 "assigned_rate_limits": { 00:41:14.899 "rw_ios_per_sec": 0, 00:41:14.899 "rw_mbytes_per_sec": 0, 00:41:14.899 "r_mbytes_per_sec": 0, 00:41:14.899 "w_mbytes_per_sec": 0 00:41:14.899 }, 00:41:14.899 "claimed": false, 00:41:14.899 "zoned": false, 00:41:14.899 "supported_io_types": { 00:41:14.899 "read": true, 00:41:14.899 "write": true, 00:41:14.899 "unmap": true, 00:41:14.899 "flush": true, 00:41:14.899 "reset": true, 00:41:14.899 "nvme_admin": true, 00:41:14.899 "nvme_io": true, 00:41:14.899 "nvme_io_md": false, 00:41:14.899 "write_zeroes": true, 00:41:14.899 "zcopy": false, 00:41:14.899 "get_zone_info": false, 00:41:14.899 "zone_management": false, 00:41:14.899 "zone_append": false, 00:41:14.899 "compare": true, 00:41:14.899 "compare_and_write": true, 00:41:14.899 "abort": true, 00:41:14.899 "seek_hole": false, 00:41:14.899 "seek_data": false, 00:41:14.899 "copy": true, 00:41:14.899 "nvme_iov_md": false 00:41:14.899 }, 00:41:14.899 "memory_domains": [ 00:41:14.899 { 00:41:14.899 "dma_device_id": "system", 00:41:14.899 "dma_device_type": 1 00:41:14.899 } 00:41:14.899 ], 00:41:14.899 "driver_specific": { 00:41:14.899 "nvme": [ 00:41:14.899 { 00:41:14.899 "trid": { 00:41:14.899 "trtype": "TCP", 00:41:14.899 "adrfam": "IPv4", 00:41:14.899 "traddr": "10.0.0.2", 00:41:14.899 
"trsvcid": "4420", 00:41:14.899 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:14.899 }, 00:41:14.899 "ctrlr_data": { 00:41:14.899 "cntlid": 1, 00:41:14.899 "vendor_id": "0x8086", 00:41:14.899 "model_number": "SPDK bdev Controller", 00:41:14.899 "serial_number": "SPDK0", 00:41:14.899 "firmware_revision": "25.01", 00:41:14.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.899 "oacs": { 00:41:14.899 "security": 0, 00:41:14.899 "format": 0, 00:41:14.899 "firmware": 0, 00:41:14.899 "ns_manage": 0 00:41:14.899 }, 00:41:14.899 "multi_ctrlr": true, 00:41:14.899 "ana_reporting": false 00:41:14.899 }, 00:41:14.899 "vs": { 00:41:14.899 "nvme_version": "1.3" 00:41:14.899 }, 00:41:14.899 "ns_data": { 00:41:14.899 "id": 1, 00:41:14.899 "can_share": true 00:41:14.899 } 00:41:14.899 } 00:41:14.899 ], 00:41:14.899 "mp_policy": "active_passive" 00:41:14.899 } 00:41:14.899 } 00:41:14.899 ] 00:41:14.899 11:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2835863 00:41:14.899 11:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:14.899 11:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:14.899 Running I/O for 10 seconds... 
00:41:15.895 Latency(us) 00:41:15.895 [2024-12-07T10:53:15.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.896 Nvme0n1 : 1.00 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:41:15.896 [2024-12-07T10:53:15.250Z] =================================================================================================================== 00:41:15.896 [2024-12-07T10:53:15.250Z] Total : 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:41:15.896 00:41:16.916 11:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:16.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:16.916 Nvme0n1 : 2.00 16192.50 63.25 0.00 0.00 0.00 0.00 0.00 00:41:16.916 [2024-12-07T10:53:16.270Z] =================================================================================================================== 00:41:16.916 [2024-12-07T10:53:16.270Z] Total : 16192.50 63.25 0.00 0.00 0.00 0.00 0.00 00:41:16.916 00:41:16.916 true 00:41:16.916 11:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:16.916 11:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:17.176 11:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:17.176 11:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:17.176 11:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2835863 00:41:18.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:18.116 Nvme0n1 : 3.00 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:41:18.116 [2024-12-07T10:53:17.470Z] =================================================================================================================== 00:41:18.116 [2024-12-07T10:53:17.470Z] Total : 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:41:18.116 00:41:19.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:19.058 Nvme0n1 : 4.00 16287.75 63.62 0.00 0.00 0.00 0.00 0.00 00:41:19.058 [2024-12-07T10:53:18.412Z] =================================================================================================================== 00:41:19.058 [2024-12-07T10:53:18.412Z] Total : 16287.75 63.62 0.00 0.00 0.00 0.00 0.00 00:41:19.058 00:41:20.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:20.000 Nvme0n1 : 5.00 16319.60 63.75 0.00 0.00 0.00 0.00 0.00 00:41:20.000 [2024-12-07T10:53:19.354Z] =================================================================================================================== 00:41:20.000 [2024-12-07T10:53:19.354Z] Total : 16319.60 63.75 0.00 0.00 0.00 0.00 0.00 00:41:20.000 00:41:20.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:20.943 Nvme0n1 : 6.00 16340.67 63.83 0.00 0.00 0.00 0.00 0.00 00:41:20.943 [2024-12-07T10:53:20.297Z] =================================================================================================================== 00:41:20.943 [2024-12-07T10:53:20.297Z] Total : 16340.67 63.83 0.00 0.00 0.00 0.00 0.00 00:41:20.943 00:41:21.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:21.891 Nvme0n1 : 7.00 16363.14 63.92 0.00 0.00 0.00 0.00 0.00 00:41:21.891 [2024-12-07T10:53:21.245Z] 
=================================================================================================================== 00:41:21.891 [2024-12-07T10:53:21.245Z] Total : 16363.14 63.92 0.00 0.00 0.00 0.00 0.00 00:41:21.891 00:41:22.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:22.828 Nvme0n1 : 8.00 16367.12 63.93 0.00 0.00 0.00 0.00 0.00 00:41:22.828 [2024-12-07T10:53:22.182Z] =================================================================================================================== 00:41:22.828 [2024-12-07T10:53:22.182Z] Total : 16367.12 63.93 0.00 0.00 0.00 0.00 0.00 00:41:22.828 00:41:24.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:24.210 Nvme0n1 : 9.00 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:41:24.210 [2024-12-07T10:53:23.564Z] =================================================================================================================== 00:41:24.210 [2024-12-07T10:53:23.564Z] Total : 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:41:24.210 00:41:25.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:25.151 Nvme0n1 : 10.00 16395.70 64.05 0.00 0.00 0.00 0.00 0.00 00:41:25.151 [2024-12-07T10:53:24.505Z] =================================================================================================================== 00:41:25.151 [2024-12-07T10:53:24.505Z] Total : 16395.70 64.05 0.00 0.00 0.00 0.00 0.00 00:41:25.151 00:41:25.151 00:41:25.151 Latency(us) 00:41:25.151 [2024-12-07T10:53:24.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:25.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:25.151 Nvme0n1 : 10.01 16395.42 64.04 0.00 0.00 7803.27 5952.85 17694.72 00:41:25.151 [2024-12-07T10:53:24.505Z] =================================================================================================================== 00:41:25.151 [2024-12-07T10:53:24.505Z] Total : 16395.42 64.04 
0.00 0.00 7803.27 5952.85 17694.72 00:41:25.151 { 00:41:25.151 "results": [ 00:41:25.151 { 00:41:25.151 "job": "Nvme0n1", 00:41:25.151 "core_mask": "0x2", 00:41:25.151 "workload": "randwrite", 00:41:25.151 "status": "finished", 00:41:25.151 "queue_depth": 128, 00:41:25.151 "io_size": 4096, 00:41:25.151 "runtime": 10.00798, 00:41:25.151 "iops": 16395.416457666783, 00:41:25.151 "mibps": 64.04459553776087, 00:41:25.151 "io_failed": 0, 00:41:25.151 "io_timeout": 0, 00:41:25.151 "avg_latency_us": 7803.269542493219, 00:41:25.151 "min_latency_us": 5952.8533333333335, 00:41:25.151 "max_latency_us": 17694.72 00:41:25.151 } 00:41:25.151 ], 00:41:25.151 "core_count": 1 00:41:25.151 } 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2835757 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2835757 ']' 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2835757 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835757 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2835757' 00:41:25.151 killing process with pid 2835757 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2835757 00:41:25.151 Received shutdown signal, test time was about 10.000000 seconds 00:41:25.151 00:41:25.151 Latency(us) 00:41:25.151 [2024-12-07T10:53:24.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:25.151 [2024-12-07T10:53:24.505Z] =================================================================================================================== 00:41:25.151 [2024-12-07T10:53:24.505Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:25.151 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2835757 00:41:25.412 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:25.671 11:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:25.935 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:25.935 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:25.935 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:25.935 11:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:25.935 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2831971 00:41:25.935 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2831971 00:41:26.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2831971 Killed "${NVMF_APP[@]}" "$@" 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2838062 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2838062 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2838062 ']' 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:26.196 11:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:26.196 [2024-12-07 11:53:25.439288] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:26.196 [2024-12-07 11:53:25.441910] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:26.196 [2024-12-07 11:53:25.442025] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:26.456 [2024-12-07 11:53:25.596462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.456 [2024-12-07 11:53:25.693424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:26.456 [2024-12-07 11:53:25.693467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:26.457 [2024-12-07 11:53:25.693480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:26.457 [2024-12-07 11:53:25.693493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:26.457 [2024-12-07 11:53:25.693505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:26.457 [2024-12-07 11:53:25.694720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.718 [2024-12-07 11:53:25.937195] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:26.718 [2024-12-07 11:53:25.937485] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:26.980 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:27.242 [2024-12-07 11:53:26.406613] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:27.242 [2024-12-07 11:53:26.406816] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:27.242 [2024-12-07 11:53:26.406866] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:27.242 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:27.504 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 -t 2000 00:41:27.504 [ 
00:41:27.504 { 00:41:27.504 "name": "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79", 00:41:27.504 "aliases": [ 00:41:27.504 "lvs/lvol" 00:41:27.504 ], 00:41:27.504 "product_name": "Logical Volume", 00:41:27.504 "block_size": 4096, 00:41:27.504 "num_blocks": 38912, 00:41:27.504 "uuid": "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79", 00:41:27.504 "assigned_rate_limits": { 00:41:27.504 "rw_ios_per_sec": 0, 00:41:27.504 "rw_mbytes_per_sec": 0, 00:41:27.504 "r_mbytes_per_sec": 0, 00:41:27.504 "w_mbytes_per_sec": 0 00:41:27.504 }, 00:41:27.504 "claimed": false, 00:41:27.504 "zoned": false, 00:41:27.504 "supported_io_types": { 00:41:27.504 "read": true, 00:41:27.504 "write": true, 00:41:27.504 "unmap": true, 00:41:27.504 "flush": false, 00:41:27.504 "reset": true, 00:41:27.504 "nvme_admin": false, 00:41:27.504 "nvme_io": false, 00:41:27.504 "nvme_io_md": false, 00:41:27.504 "write_zeroes": true, 00:41:27.504 "zcopy": false, 00:41:27.504 "get_zone_info": false, 00:41:27.504 "zone_management": false, 00:41:27.504 "zone_append": false, 00:41:27.504 "compare": false, 00:41:27.504 "compare_and_write": false, 00:41:27.504 "abort": false, 00:41:27.504 "seek_hole": true, 00:41:27.504 "seek_data": true, 00:41:27.504 "copy": false, 00:41:27.504 "nvme_iov_md": false 00:41:27.504 }, 00:41:27.504 "driver_specific": { 00:41:27.504 "lvol": { 00:41:27.504 "lvol_store_uuid": "3d094379-f6b5-458d-8127-d415783e6758", 00:41:27.504 "base_bdev": "aio_bdev", 00:41:27.504 "thin_provision": false, 00:41:27.504 "num_allocated_clusters": 38, 00:41:27.504 "snapshot": false, 00:41:27.504 "clone": false, 00:41:27.504 "esnap_clone": false 00:41:27.504 } 00:41:27.504 } 00:41:27.504 } 00:41:27.504 ] 00:41:27.504 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:27.504 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:27.504 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:27.766 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:27.766 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:27.766 11:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:28.026 [2024-12-07 11:53:27.331500] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3d094379-f6b5-458d-8127-d415783e6758 00:41:28.026 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:28.287 request: 00:41:28.287 { 00:41:28.287 "uuid": "3d094379-f6b5-458d-8127-d415783e6758", 00:41:28.287 "method": "bdev_lvol_get_lvstores", 00:41:28.287 "req_id": 1 00:41:28.287 } 00:41:28.287 Got JSON-RPC 
error response 00:41:28.287 response: 00:41:28.287 { 00:41:28.287 "code": -19, 00:41:28.287 "message": "No such device" 00:41:28.287 } 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:28.287 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:28.548 aio_bdev 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:28.548 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:28.548 11:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:28.808 11:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 -t 2000 00:41:28.808 [ 00:41:28.808 { 00:41:28.808 "name": "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79", 00:41:28.808 "aliases": [ 00:41:28.808 "lvs/lvol" 00:41:28.808 ], 00:41:28.808 "product_name": "Logical Volume", 00:41:28.808 "block_size": 4096, 00:41:28.808 "num_blocks": 38912, 00:41:28.808 "uuid": "6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79", 00:41:28.808 "assigned_rate_limits": { 00:41:28.808 "rw_ios_per_sec": 0, 00:41:28.808 "rw_mbytes_per_sec": 0, 00:41:28.808 "r_mbytes_per_sec": 0, 00:41:28.808 "w_mbytes_per_sec": 0 00:41:28.808 }, 00:41:28.808 "claimed": false, 00:41:28.808 "zoned": false, 00:41:28.808 "supported_io_types": { 00:41:28.808 "read": true, 00:41:28.808 "write": true, 00:41:28.808 "unmap": true, 00:41:28.808 "flush": false, 00:41:28.808 "reset": true, 00:41:28.808 "nvme_admin": false, 00:41:28.808 "nvme_io": false, 00:41:28.808 "nvme_io_md": false, 00:41:28.808 "write_zeroes": true, 00:41:28.808 "zcopy": false, 00:41:28.808 "get_zone_info": false, 00:41:28.808 "zone_management": false, 00:41:28.808 "zone_append": false, 00:41:28.808 "compare": false, 00:41:28.808 "compare_and_write": false, 00:41:28.808 "abort": false, 00:41:28.808 "seek_hole": true, 00:41:28.808 "seek_data": true, 00:41:28.808 "copy": false, 00:41:28.808 "nvme_iov_md": false 00:41:28.808 }, 00:41:28.808 "driver_specific": { 00:41:28.808 "lvol": { 00:41:28.808 "lvol_store_uuid": "3d094379-f6b5-458d-8127-d415783e6758", 00:41:28.808 "base_bdev": "aio_bdev", 00:41:28.808 "thin_provision": false, 00:41:28.808 "num_allocated_clusters": 38, 00:41:28.809 
"snapshot": false, 00:41:28.809 "clone": false, 00:41:28.809 "esnap_clone": false 00:41:28.809 } 00:41:28.809 } 00:41:28.809 } 00:41:28.809 ] 00:41:28.809 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:28.809 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:28.809 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:29.070 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:29.070 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:29.070 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:29.070 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:29.070 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f5b59dd-4a5c-428f-a3e7-1e01e31bcb79 00:41:29.330 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d094379-f6b5-458d-8127-d415783e6758 00:41:29.590 11:53:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:29.590 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:29.851 00:41:29.851 real 0m18.023s 00:41:29.851 user 0m35.730s 00:41:29.851 sys 0m3.346s 00:41:29.851 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.851 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:29.851 ************************************ 00:41:29.851 END TEST lvs_grow_dirty 00:41:29.851 ************************************ 00:41:29.851 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:29.851 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:41:29.851 11:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:29.851 nvmf_trace.0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:29.851 rmmod nvme_tcp 00:41:29.851 rmmod nvme_fabrics 00:41:29.851 rmmod nvme_keyring 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2838062 ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2838062 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 2838062 ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2838062 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838062 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838062' 00:41:29.851 killing process with pid 2838062 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2838062 00:41:29.851 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2838062 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:30.792 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:30.793 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.793 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:30.793 11:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.706 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:32.706 00:41:32.706 real 0m45.516s 00:41:32.706 user 0m55.066s 00:41:32.706 sys 0m10.530s 00:41:32.706 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.706 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:32.706 ************************************ 00:41:32.706 END TEST nvmf_lvs_grow 00:41:32.706 ************************************ 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:32.967 11:53:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:32.967 ************************************ 00:41:32.967 START TEST nvmf_bdev_io_wait 00:41:32.967 ************************************ 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:32.967 * Looking for test storage... 00:41:32.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:32.967 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:32.968 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:33.230 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.230 --rc genhtml_branch_coverage=1 00:41:33.230 --rc genhtml_function_coverage=1 00:41:33.230 --rc genhtml_legend=1 00:41:33.230 --rc geninfo_all_blocks=1 00:41:33.230 --rc geninfo_unexecuted_blocks=1 00:41:33.230 00:41:33.230 ' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.230 --rc genhtml_branch_coverage=1 00:41:33.230 --rc genhtml_function_coverage=1 00:41:33.230 --rc genhtml_legend=1 00:41:33.230 --rc geninfo_all_blocks=1 00:41:33.230 --rc geninfo_unexecuted_blocks=1 00:41:33.230 00:41:33.230 ' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.230 --rc genhtml_branch_coverage=1 00:41:33.230 --rc genhtml_function_coverage=1 00:41:33.230 --rc genhtml_legend=1 00:41:33.230 --rc geninfo_all_blocks=1 00:41:33.230 --rc geninfo_unexecuted_blocks=1 00:41:33.230 00:41:33.230 ' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.230 --rc genhtml_branch_coverage=1 00:41:33.230 --rc genhtml_function_coverage=1 00:41:33.230 --rc genhtml_legend=1 00:41:33.230 --rc geninfo_all_blocks=1 00:41:33.230 --rc geninfo_unexecuted_blocks=1 00:41:33.230 00:41:33.230 ' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:33.230 11:53:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.230 11:53:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:33.230 11:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:41.364 11:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:41.364 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:41.364 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.364 11:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.364 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:41:41.365 Found net devices under 0000:31:00.0: cvl_0_0 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:41.365 Found net devices under 0000:31:00.1: cvl_0_1 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:41.365 11:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:41.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:41.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:41:41.365 00:41:41.365 --- 10.0.0.2 ping statistics --- 00:41:41.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.365 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:41.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:41:41.365 00:41:41.365 --- 10.0.0.1 ping statistics --- 00:41:41.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.365 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:41.365 11:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2843073 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2843073 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2843073 ']' 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:41.365 11:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.365 [2024-12-07 11:53:39.792781] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:41.365 [2024-12-07 11:53:39.795497] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:41.365 [2024-12-07 11:53:39.795601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.365 [2024-12-07 11:53:39.946107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:41.365 [2024-12-07 11:53:40.051853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.365 [2024-12-07 11:53:40.051897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.365 [2024-12-07 11:53:40.051911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.365 [2024-12-07 11:53:40.051923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.365 [2024-12-07 11:53:40.051936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:41.365 [2024-12-07 11:53:40.054160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.365 [2024-12-07 11:53:40.054254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.365 [2024-12-07 11:53:40.054386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.365 [2024-12-07 11:53:40.054412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:41.365 [2024-12-07 11:53:40.054842] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:41.365 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.366 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.366 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.366 11:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:41.366 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.366 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.626 [2024-12-07 11:53:40.755041] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:41.626 [2024-12-07 11:53:40.755238] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:41.626 [2024-12-07 11:53:40.756632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:41.626 [2024-12-07 11:53:40.756738] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.626 [2024-12-07 11:53:40.767438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.626 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.627 Malloc0 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.627 11:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:41.627 [2024-12-07 11:53:40.887343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2843286 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2843288 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:41.627 11:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:41.627 { 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme$subsystem", 00:41:41.627 "trtype": "$TEST_TRANSPORT", 00:41:41.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "$NVMF_PORT", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:41.627 "hdgst": ${hdgst:-false}, 00:41:41.627 "ddgst": ${ddgst:-false} 00:41:41.627 }, 00:41:41.627 "method": "bdev_nvme_attach_controller" 00:41:41.627 } 00:41:41.627 EOF 00:41:41.627 )") 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2843290 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:41.627 11:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2843292 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:41.627 { 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme$subsystem", 00:41:41.627 "trtype": "$TEST_TRANSPORT", 00:41:41.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "$NVMF_PORT", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:41.627 "hdgst": ${hdgst:-false}, 00:41:41.627 "ddgst": ${ddgst:-false} 00:41:41.627 }, 00:41:41.627 "method": "bdev_nvme_attach_controller" 00:41:41.627 } 00:41:41.627 EOF 00:41:41.627 )") 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:41.627 { 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme$subsystem", 00:41:41.627 "trtype": "$TEST_TRANSPORT", 00:41:41.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "$NVMF_PORT", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:41.627 "hdgst": ${hdgst:-false}, 00:41:41.627 "ddgst": ${ddgst:-false} 00:41:41.627 }, 00:41:41.627 "method": "bdev_nvme_attach_controller" 00:41:41.627 } 00:41:41.627 EOF 00:41:41.627 )") 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:41.627 { 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme$subsystem", 00:41:41.627 "trtype": "$TEST_TRANSPORT", 00:41:41.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "$NVMF_PORT", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:41.627 "hdgst": ${hdgst:-false}, 00:41:41.627 "ddgst": ${ddgst:-false} 00:41:41.627 }, 00:41:41.627 "method": 
"bdev_nvme_attach_controller" 00:41:41.627 } 00:41:41.627 EOF 00:41:41.627 )") 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2843286 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme1", 00:41:41.627 "trtype": "tcp", 00:41:41.627 "traddr": "10.0.0.2", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "4420", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:41.627 "hdgst": false, 00:41:41.627 "ddgst": false 00:41:41.627 }, 00:41:41.627 "method": "bdev_nvme_attach_controller" 00:41:41.627 }' 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:41.627 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:41.627 "params": { 00:41:41.627 "name": "Nvme1", 00:41:41.627 "trtype": "tcp", 00:41:41.627 "traddr": "10.0.0.2", 00:41:41.627 "adrfam": "ipv4", 00:41:41.627 "trsvcid": "4420", 00:41:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:41.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:41.628 "hdgst": false, 00:41:41.628 "ddgst": false 00:41:41.628 }, 00:41:41.628 "method": "bdev_nvme_attach_controller" 00:41:41.628 }' 00:41:41.628 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:41.628 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:41.628 "params": { 00:41:41.628 "name": "Nvme1", 00:41:41.628 "trtype": "tcp", 00:41:41.628 "traddr": "10.0.0.2", 00:41:41.628 "adrfam": "ipv4", 00:41:41.628 "trsvcid": "4420", 00:41:41.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:41.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:41.628 "hdgst": false, 00:41:41.628 "ddgst": false 00:41:41.628 }, 00:41:41.628 "method": "bdev_nvme_attach_controller" 00:41:41.628 }' 00:41:41.628 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:41.628 11:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:41.628 "params": { 00:41:41.628 "name": "Nvme1", 00:41:41.628 "trtype": "tcp", 00:41:41.628 "traddr": "10.0.0.2", 00:41:41.628 "adrfam": "ipv4", 00:41:41.628 "trsvcid": "4420", 00:41:41.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:41.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:41.628 "hdgst": false, 00:41:41.628 "ddgst": false 00:41:41.628 }, 00:41:41.628 "method": "bdev_nvme_attach_controller" 
00:41:41.628 }' 00:41:41.628 [2024-12-07 11:53:40.971708] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:41.628 [2024-12-07 11:53:40.971813] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:41.628 [2024-12-07 11:53:40.971979] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:41.628 [2024-12-07 11:53:40.972092] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:41.628 [2024-12-07 11:53:40.974354] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:41.628 [2024-12-07 11:53:40.974447] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:41.888 [2024-12-07 11:53:40.982379] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:41:41.888 [2024-12-07 11:53:40.982472] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:41.888 [2024-12-07 11:53:41.167117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.888 [2024-12-07 11:53:41.209684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.148 [2024-12-07 11:53:41.258956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.148 [2024-12-07 11:53:41.263107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:42.148 [2024-12-07 11:53:41.304324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:42.148 [2024-12-07 11:53:41.326334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.148 [2024-12-07 11:53:41.353435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:41:42.148 [2024-12-07 11:53:41.424047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:42.407 Running I/O for 1 seconds... 00:41:42.407 Running I/O for 1 seconds... 00:41:42.668 Running I/O for 1 seconds... 00:41:42.668 Running I/O for 1 seconds... 
00:41:43.608 11257.00 IOPS, 43.97 MiB/s 00:41:43.608 Latency(us) 00:41:43.608 [2024-12-07T10:53:42.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.608 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:43.608 Nvme1n1 : 1.01 11260.25 43.99 0.00 0.00 11292.62 3454.29 15291.73 00:41:43.608 [2024-12-07T10:53:42.962Z] =================================================================================================================== 00:41:43.608 [2024-12-07T10:53:42.962Z] Total : 11260.25 43.99 0.00 0.00 11292.62 3454.29 15291.73 00:41:43.608 166976.00 IOPS, 652.25 MiB/s 00:41:43.608 Latency(us) 00:41:43.608 [2024-12-07T10:53:42.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.608 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:43.608 Nvme1n1 : 1.00 166625.42 650.88 0.00 0.00 763.85 332.80 2061.65 00:41:43.608 [2024-12-07T10:53:42.962Z] =================================================================================================================== 00:41:43.608 [2024-12-07T10:53:42.962Z] Total : 166625.42 650.88 0.00 0.00 763.85 332.80 2061.65 00:41:43.608 10286.00 IOPS, 40.18 MiB/s 00:41:43.608 Latency(us) 00:41:43.608 [2024-12-07T10:53:42.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.608 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:43.608 Nvme1n1 : 1.01 10371.05 40.51 0.00 0.00 12313.31 3085.65 21080.75 00:41:43.608 [2024-12-07T10:53:42.962Z] =================================================================================================================== 00:41:43.608 [2024-12-07T10:53:42.962Z] Total : 10371.05 40.51 0.00 0.00 12313.31 3085.65 21080.75 00:41:43.608 12458.00 IOPS, 48.66 MiB/s 00:41:43.608 Latency(us) 00:41:43.608 [2024-12-07T10:53:42.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.608 Job: Nvme1n1 (Core 
Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:43.608 Nvme1n1 : 1.01 12527.35 48.93 0.00 0.00 10182.97 2239.15 16384.00 00:41:43.608 [2024-12-07T10:53:42.962Z] =================================================================================================================== 00:41:43.608 [2024-12-07T10:53:42.962Z] Total : 12527.35 48.93 0.00 0.00 10182.97 2239.15 16384.00 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2843288 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2843290 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2843292 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:44.180 11:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:44.180 rmmod nvme_tcp 00:41:44.180 rmmod nvme_fabrics 00:41:44.180 rmmod nvme_keyring 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2843073 ']' 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2843073 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2843073 ']' 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2843073 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2843073 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2843073' 00:41:44.180 killing process with pid 2843073 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2843073 00:41:44.180 11:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2843073 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.122 11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:45.122 
11:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:47.032 00:41:47.032 real 0m14.115s 00:41:47.032 user 0m20.802s 00:41:47.032 sys 0m7.812s 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:47.032 ************************************ 00:41:47.032 END TEST nvmf_bdev_io_wait 00:41:47.032 ************************************ 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:47.032 ************************************ 00:41:47.032 START TEST nvmf_queue_depth 00:41:47.032 ************************************ 00:41:47.032 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:47.294 * Looking for test storage... 
00:41:47.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:47.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.294 --rc genhtml_branch_coverage=1 00:41:47.294 --rc genhtml_function_coverage=1 00:41:47.294 --rc genhtml_legend=1 00:41:47.294 --rc geninfo_all_blocks=1 00:41:47.294 --rc geninfo_unexecuted_blocks=1 00:41:47.294 00:41:47.294 ' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:47.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.294 --rc genhtml_branch_coverage=1 00:41:47.294 --rc genhtml_function_coverage=1 00:41:47.294 --rc genhtml_legend=1 00:41:47.294 --rc geninfo_all_blocks=1 00:41:47.294 --rc geninfo_unexecuted_blocks=1 00:41:47.294 00:41:47.294 ' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:47.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.294 --rc genhtml_branch_coverage=1 00:41:47.294 --rc genhtml_function_coverage=1 00:41:47.294 --rc genhtml_legend=1 00:41:47.294 --rc geninfo_all_blocks=1 00:41:47.294 --rc geninfo_unexecuted_blocks=1 00:41:47.294 00:41:47.294 ' 00:41:47.294 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:47.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.294 --rc genhtml_branch_coverage=1 00:41:47.294 --rc genhtml_function_coverage=1 00:41:47.294 --rc genhtml_legend=1 00:41:47.294 --rc 
geninfo_all_blocks=1 00:41:47.295 --rc geninfo_unexecuted_blocks=1 00:41:47.295 00:41:47.295 ' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.295 11:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:47.295 11:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:47.295 11:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:47.295 11:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:55.427 
11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:55.427 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:55.427 11:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:55.427 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:55.427 Found net devices under 0000:31:00.0: cvl_0_0 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:55.427 Found net devices under 0000:31:00.1: cvl_0_1 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:55.427 11:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:55.427 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:55.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:55.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:41:55.428 00:41:55.428 --- 10.0.0.2 ping statistics --- 00:41:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.428 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:55.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:55.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:41:55.428 00:41:55.428 --- 10.0.0.1 ping statistics --- 00:41:55.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.428 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:55.428 11:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2848050 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2848050 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2848050 ']' 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:55.428 11:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 [2024-12-07 11:53:53.839616] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:55.428 [2024-12-07 11:53:53.841903] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:41:55.428 [2024-12-07 11:53:53.841987] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:55.428 [2024-12-07 11:53:53.998123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.428 [2024-12-07 11:53:54.097288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:55.428 [2024-12-07 11:53:54.097328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:55.428 [2024-12-07 11:53:54.097342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:55.428 [2024-12-07 11:53:54.097354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:55.428 [2024-12-07 11:53:54.097369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:55.428 [2024-12-07 11:53:54.098557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:55.428 [2024-12-07 11:53:54.342344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:55.428 [2024-12-07 11:53:54.342636] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 [2024-12-07 11:53:54.643694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 Malloc0 00:41:55.428 11:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.428 [2024-12-07 11:53:54.767429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.428 
11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2848392 00:41:55.428 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2848392 /var/tmp/bdevperf.sock 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2848392 ']' 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:55.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:55.429 11:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.689 [2024-12-07 11:53:54.858531] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:41:55.689 [2024-12-07 11:53:54.858646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848392 ] 00:41:55.689 [2024-12-07 11:53:54.986080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.949 [2024-12-07 11:53:55.083478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:56.519 NVMe0n1 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.519 11:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:56.779 Running I/O for 10 seconds... 
00:41:58.663 8174.00 IOPS, 31.93 MiB/s [2024-12-07T10:53:58.959Z] 8192.00 IOPS, 32.00 MiB/s [2024-12-07T10:54:00.342Z] 8869.33 IOPS, 34.65 MiB/s [2024-12-07T10:54:01.285Z] 9312.25 IOPS, 36.38 MiB/s [2024-12-07T10:54:02.226Z] 9627.60 IOPS, 37.61 MiB/s [2024-12-07T10:54:03.168Z] 9814.67 IOPS, 38.34 MiB/s [2024-12-07T10:54:04.113Z] 9948.86 IOPS, 38.86 MiB/s [2024-12-07T10:54:05.058Z] 10052.38 IOPS, 39.27 MiB/s [2024-12-07T10:54:06.000Z] 10127.44 IOPS, 39.56 MiB/s [2024-12-07T10:54:06.264Z] 10209.70 IOPS, 39.88 MiB/s 00:42:06.910 Latency(us) 00:42:06.910 [2024-12-07T10:54:06.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:06.910 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:06.910 Verification LBA range: start 0x0 length 0x4000 00:42:06.910 NVMe0n1 : 10.07 10241.58 40.01 0.00 0.00 99575.92 26105.17 72089.60 00:42:06.910 [2024-12-07T10:54:06.264Z] =================================================================================================================== 00:42:06.910 [2024-12-07T10:54:06.264Z] Total : 10241.58 40.01 0.00 0.00 99575.92 26105.17 72089.60 00:42:06.910 { 00:42:06.910 "results": [ 00:42:06.910 { 00:42:06.910 "job": "NVMe0n1", 00:42:06.910 "core_mask": "0x1", 00:42:06.910 "workload": "verify", 00:42:06.910 "status": "finished", 00:42:06.910 "verify_range": { 00:42:06.910 "start": 0, 00:42:06.910 "length": 16384 00:42:06.910 }, 00:42:06.910 "queue_depth": 1024, 00:42:06.910 "io_size": 4096, 00:42:06.910 "runtime": 10.068857, 00:42:06.910 "iops": 10241.579555653636, 00:42:06.910 "mibps": 40.006170139272015, 00:42:06.910 "io_failed": 0, 00:42:06.910 "io_timeout": 0, 00:42:06.910 "avg_latency_us": 99575.92118307618, 00:42:06.910 "min_latency_us": 26105.173333333332, 00:42:06.910 "max_latency_us": 72089.6 00:42:06.910 } 00:42:06.910 ], 00:42:06.910 "core_count": 1 00:42:06.910 } 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2848392 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2848392 ']' 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2848392 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848392 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848392' 00:42:06.910 killing process with pid 2848392 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2848392 00:42:06.910 Received shutdown signal, test time was about 10.000000 seconds 00:42:06.910 00:42:06.910 Latency(us) 00:42:06.910 [2024-12-07T10:54:06.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:06.910 [2024-12-07T10:54:06.264Z] =================================================================================================================== 00:42:06.910 [2024-12-07T10:54:06.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:06.910 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2848392 00:42:07.485 11:54:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:07.485 rmmod nvme_tcp 00:42:07.485 rmmod nvme_fabrics 00:42:07.485 rmmod nvme_keyring 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2848050 ']' 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2848050 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2848050 ']' 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2848050 00:42:07.485 11:54:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:07.485 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848050 00:42:07.746 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:07.746 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:07.746 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848050' 00:42:07.746 killing process with pid 2848050 00:42:07.746 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2848050 00:42:07.746 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2848050 00:42:08.384 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:08.384 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:08.384 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.385 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:10.387 00:42:10.387 real 0m23.265s 00:42:10.387 user 0m26.005s 00:42:10.387 sys 0m7.516s 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:10.387 ************************************ 00:42:10.387 END TEST nvmf_queue_depth 00:42:10.387 ************************************ 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:10.387 ************************************ 00:42:10.387 START 
TEST nvmf_target_multipath 00:42:10.387 ************************************ 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:10.387 * Looking for test storage... 00:42:10.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:42:10.387 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:10.648 11:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:10.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.648 --rc genhtml_branch_coverage=1 00:42:10.648 --rc genhtml_function_coverage=1 00:42:10.648 --rc genhtml_legend=1 00:42:10.648 --rc geninfo_all_blocks=1 00:42:10.648 --rc geninfo_unexecuted_blocks=1 00:42:10.648 00:42:10.648 ' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:10.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.648 --rc genhtml_branch_coverage=1 00:42:10.648 --rc genhtml_function_coverage=1 00:42:10.648 --rc genhtml_legend=1 00:42:10.648 --rc geninfo_all_blocks=1 00:42:10.648 --rc geninfo_unexecuted_blocks=1 00:42:10.648 00:42:10.648 ' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:10.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.648 --rc genhtml_branch_coverage=1 00:42:10.648 --rc genhtml_function_coverage=1 00:42:10.648 --rc genhtml_legend=1 00:42:10.648 --rc geninfo_all_blocks=1 00:42:10.648 --rc geninfo_unexecuted_blocks=1 00:42:10.648 00:42:10.648 ' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:10.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.648 --rc genhtml_branch_coverage=1 00:42:10.648 --rc genhtml_function_coverage=1 00:42:10.648 --rc genhtml_legend=1 00:42:10.648 --rc geninfo_all_blocks=1 00:42:10.648 --rc geninfo_unexecuted_blocks=1 00:42:10.648 00:42:10.648 ' 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:10.648 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:10.649 11:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.649 11:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:10.649 11:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.239 11:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:17.239 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:17.239 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:17.239 Found net devices under 0000:31:00.0: cvl_0_0 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.239 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.240 11:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:17.240 Found net devices under 0000:31:00.1: cvl_0_1 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.240 11:54:16 
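The trace above walks `gather_supported_nvmf_pci_devs` sorting the two 0x8086:0x159b ports into the `e810` bucket. A minimal standalone sketch of that vendor:device classification follows; `classify_pci_dev` is an assumed helper name (not part of SPDK), and the ID table is reduced to the entries visible in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the bucket logic seen in nvmf/common.sh's xtrace above:
# map a PCI vendor/device pair onto the e810/x722/mlx device families.
intel=0x8086 mellanox=0x15b3

classify_pci_dev() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;    # Intel E810 (ice)
        "$intel:0x37d2")                 echo x722 ;;    # Intel X722
        "$mellanox:"*)                   echo mlx ;;     # any Mellanox ID
        *)                               echo unknown ;;
    esac
}

classify_pci_dev 0x8086 0x159b   # the devices found at 0000:31:00.0/.1 above
```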
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.240 11:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:42:17.240 00:42:17.240 --- 10.0.0.2 ping statistics --- 00:42:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.240 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:17.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:42:17.240 00:42:17.240 --- 10.0.0.1 ping statistics --- 00:42:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.240 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.240 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:17.503 only one NIC for nvmf test 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:17.503 11:54:16 
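The `nvmf_tcp_init` commands traced above move the target NIC into a private namespace, address both ends, open TCP port 4420, and verify reachability with `ping`. The dry-run sketch below mirrors that sequence using the interface names and addresses the log shows; `run` only echoes each command so the sketch needs no root privileges, and `setup_target_ns` is an assumed wrapper name, not an SPDK function.

```shell
#!/usr/bin/env bash
# Dry-run re-creation of the namespace plumbing from the trace above.
run() { echo "+ $*"; }   # replace with direct execution (as root) for real use

setup_target_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                       # target NIC into ns
    run ip addr add 10.0.0.1/24 dev "$ini_if"                   # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, `ping -c 1 10.0.0.2` from the initiator side and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` confirm the link, as in the log.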
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:17.503 rmmod nvme_tcp 00:42:17.503 rmmod nvme_fabrics 00:42:17.503 rmmod nvme_keyring 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:17.503 11:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:17.503 11:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.420 
11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:19.420 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:19.682 00:42:19.682 real 0m9.131s 00:42:19.682 user 0m1.768s 00:42:19.682 sys 0m5.168s 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:19.682 ************************************ 00:42:19.682 END TEST nvmf_target_multipath 00:42:19.682 ************************************ 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:19.682 ************************************ 00:42:19.682 START TEST nvmf_zcopy 00:42:19.682 ************************************ 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:19.682 * Looking for test storage... 
00:42:19.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:19.682 11:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:19.945 11:54:19 
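The `lt 1.15 2` trace above steps through scripts/common.sh's component-wise version comparison (split on `.-`, compare each field, pad the shorter version). A simplified re-implementation of the same idea, under the assumption that purely numeric components suffice (`version_lt` is an illustrative name, not the SPDK helper):

```shell
#!/usr/bin/env bash
# Component-wise dotted-version comparison, as exercised by the xtrace above.
# Returns 0 (success) when $1 < $2, 1 otherwise; missing fields count as 0.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result traced above
```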
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:19.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.945 --rc genhtml_branch_coverage=1 00:42:19.945 --rc genhtml_function_coverage=1 00:42:19.945 --rc genhtml_legend=1 00:42:19.945 --rc geninfo_all_blocks=1 00:42:19.945 --rc geninfo_unexecuted_blocks=1 00:42:19.945 00:42:19.945 ' 00:42:19.945 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:19.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.945 --rc genhtml_branch_coverage=1 00:42:19.945 --rc genhtml_function_coverage=1 00:42:19.945 --rc genhtml_legend=1 00:42:19.945 --rc geninfo_all_blocks=1 00:42:19.945 --rc geninfo_unexecuted_blocks=1 00:42:19.945 00:42:19.945 ' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.946 --rc genhtml_branch_coverage=1 00:42:19.946 --rc genhtml_function_coverage=1 00:42:19.946 --rc genhtml_legend=1 00:42:19.946 --rc geninfo_all_blocks=1 00:42:19.946 --rc geninfo_unexecuted_blocks=1 00:42:19.946 00:42:19.946 ' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.946 --rc genhtml_branch_coverage=1 00:42:19.946 --rc genhtml_function_coverage=1 00:42:19.946 --rc genhtml_legend=1 00:42:19.946 --rc geninfo_all_blocks=1 00:42:19.946 --rc geninfo_unexecuted_blocks=1 00:42:19.946 00:42:19.946 ' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:19.946 11:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:19.946 11:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:19.946 11:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:28.094 
11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:28.094 11:54:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:28.094 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:28.095 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:28.095 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:28.095 Found net devices under 0000:31:00.0: cvl_0_0 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:28.095 Found net devices under 0000:31:00.1: cvl_0_1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:28.095 11:54:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:28.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:28.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:42:28.095 00:42:28.095 --- 10.0.0.2 ping statistics --- 00:42:28.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.095 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:28.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:28.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:42:28.095 00:42:28.095 --- 10.0.0.1 ping statistics --- 00:42:28.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.095 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2859419 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2859419 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2859419 ']' 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:28.095 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 [2024-12-07 11:54:26.465031] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:28.096 [2024-12-07 11:54:26.467327] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:42:28.096 [2024-12-07 11:54:26.467414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:28.096 [2024-12-07 11:54:26.621019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.096 [2024-12-07 11:54:26.720057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:28.096 [2024-12-07 11:54:26.720097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:28.096 [2024-12-07 11:54:26.720111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.096 [2024-12-07 11:54:26.720123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.096 [2024-12-07 11:54:26.720135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:28.096 [2024-12-07 11:54:26.721342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.096 [2024-12-07 11:54:26.973110] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:28.096 [2024-12-07 11:54:26.973458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 [2024-12-07 11:54:27.274576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 
11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 [2024-12-07 11:54:27.298865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 malloc0 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:28.096 { 00:42:28.096 "params": { 00:42:28.096 "name": "Nvme$subsystem", 00:42:28.096 "trtype": "$TEST_TRANSPORT", 00:42:28.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:28.096 "adrfam": "ipv4", 00:42:28.096 "trsvcid": "$NVMF_PORT", 00:42:28.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:28.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:28.096 "hdgst": ${hdgst:-false}, 00:42:28.096 "ddgst": ${ddgst:-false} 00:42:28.096 }, 00:42:28.096 "method": "bdev_nvme_attach_controller" 00:42:28.096 } 00:42:28.096 EOF 00:42:28.096 )") 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:28.096 11:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:28.096 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:28.096 "params": { 00:42:28.096 "name": "Nvme1", 00:42:28.096 "trtype": "tcp", 00:42:28.096 "traddr": "10.0.0.2", 00:42:28.096 "adrfam": "ipv4", 00:42:28.096 "trsvcid": "4420", 00:42:28.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:28.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:28.096 "hdgst": false, 00:42:28.096 "ddgst": false 00:42:28.096 }, 00:42:28.096 "method": "bdev_nvme_attach_controller" 00:42:28.096 }' 00:42:28.358 [2024-12-07 11:54:27.464278] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:42:28.358 [2024-12-07 11:54:27.464383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859766 ] 00:42:28.358 [2024-12-07 11:54:27.592346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.358 [2024-12-07 11:54:27.688673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.930 Running I/O for 10 seconds... 
00:42:30.814 5953.00 IOPS, 46.51 MiB/s
[2024-12-07T10:54:31.106Z] 6009.50 IOPS, 46.95 MiB/s
[2024-12-07T10:54:32.489Z] 6026.33 IOPS, 47.08 MiB/s
[2024-12-07T10:54:33.430Z] 6113.25 IOPS, 47.76 MiB/s
[2024-12-07T10:54:34.375Z] 6636.60 IOPS, 51.85 MiB/s
[2024-12-07T10:54:35.315Z] 6990.50 IOPS, 54.61 MiB/s
[2024-12-07T10:54:36.255Z] 7238.00 IOPS, 56.55 MiB/s
[2024-12-07T10:54:37.193Z] 7427.75 IOPS, 58.03 MiB/s
[2024-12-07T10:54:38.132Z] 7575.78 IOPS, 59.19 MiB/s
[2024-12-07T10:54:38.132Z] 7693.70 IOPS, 60.11 MiB/s
00:42:38.778 Latency(us)
00:42:38.778 [2024-12-07T10:54:38.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:38.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:42:38.778 Verification LBA range: start 0x0 length 0x1000
00:42:38.778 Nvme1n1 : 10.01 7696.77 60.13 0.00 0.00 16573.57 1297.07 29709.65
00:42:38.778 [2024-12-07T10:54:38.132Z] ===================================================================================================================
00:42:38.778 [2024-12-07T10:54:38.132Z] Total : 7696.77 60.13 0.00 0.00 16573.57 1297.07 29709.65
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2861780
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:42:39.719 11:54:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:39.719 { 00:42:39.719 "params": { 00:42:39.719 "name": "Nvme$subsystem", 00:42:39.719 "trtype": "$TEST_TRANSPORT", 00:42:39.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:39.719 "adrfam": "ipv4", 00:42:39.719 "trsvcid": "$NVMF_PORT", 00:42:39.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:39.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:39.719 "hdgst": ${hdgst:-false}, 00:42:39.719 "ddgst": ${ddgst:-false} 00:42:39.719 }, 00:42:39.719 "method": "bdev_nvme_attach_controller" 00:42:39.719 } 00:42:39.719 EOF 00:42:39.719 )") 00:42:39.719 [2024-12-07 11:54:38.705935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.705969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:39.719 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:39.719 "params": { 00:42:39.719 "name": "Nvme1", 00:42:39.719 "trtype": "tcp", 00:42:39.719 "traddr": "10.0.0.2", 00:42:39.719 "adrfam": "ipv4", 00:42:39.719 "trsvcid": "4420", 00:42:39.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:39.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:39.719 "hdgst": false, 00:42:39.719 "ddgst": false 00:42:39.719 }, 00:42:39.719 "method": "bdev_nvme_attach_controller" 00:42:39.719 }' 00:42:39.719 [2024-12-07 11:54:38.717906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.717927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.725881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.725897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.733887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.733902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.741888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.741907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.753878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.753893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.761898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:42:39.719 [2024-12-07 11:54:38.761914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.769885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.769900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.777497] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:42:39.719 [2024-12-07 11:54:38.777593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861780 ] 00:42:39.719 [2024-12-07 11:54:38.777878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.777893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.785891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.785911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.793877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.793892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.801893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.801908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.809890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.809906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:42:39.719 [2024-12-07 11:54:38.817875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.817890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.825885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.825900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.833884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.833899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.841878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.841893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.849886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.849901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.857893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.857910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.865887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.865901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.873887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.873902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.881875] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.881893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.889886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.889900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.897887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.897902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.901773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.719 [2024-12-07 11:54:38.905878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.905893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.913887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.913902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.921879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.921893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.929887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.929902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.937886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.937900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:39.719 [2024-12-07 11:54:38.945873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.945888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.953889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.953904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.961888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.961902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.969872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.969887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.977885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.977899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.985875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.985889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.993883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:38.993898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:38.999283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.719 [2024-12-07 11:54:39.001886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.001901] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.009875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.009890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.017890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.017906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.025885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.025902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.033873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.033888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.041891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.041905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.049884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.719 [2024-12-07 11:54:39.049899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.719 [2024-12-07 11:54:39.057883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.720 [2024-12-07 11:54:39.057900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.720 [2024-12-07 11:54:39.065886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.720 [2024-12-07 11:54:39.065901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:39.981 [2024-12-07 11:54:39.073877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.073893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.081887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.081902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.089884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.089899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.097878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.097893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.105889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.105905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.113879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.113896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.121885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.121901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.129887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.129902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.137879] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.137894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.145892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.145907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.153887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.153903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.161877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.161892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.169888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.169904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.177877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.177894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.185885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.185900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.193886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.193901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.201873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.201888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.981 [2024-12-07 11:54:39.209888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.981 [2024-12-07 11:54:39.209903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.217884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.217899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.225874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.225889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.233885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.233900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.241888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.241903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.250302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.250319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.257893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.257911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.265881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 
[2024-12-07 11:54:39.265897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.273889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.273905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.281886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.281902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.289884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.289899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.297891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.297906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.305874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.305889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.313886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.313902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.321889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.321905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:39.982 [2024-12-07 11:54:39.329901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:39.982 [2024-12-07 11:54:39.329919] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.337889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.337905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.345886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.345902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.353874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.353889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.361896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.361911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.369876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.369891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.377888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.377903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.385885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.385901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.393875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.393890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:40.243 [2024-12-07 11:54:39.401885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.401901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.409885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.409901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.417878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.417894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.425892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.425907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.433876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.433890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.441884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.441899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.449884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.449899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.457885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.457901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.466166] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.466193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.473889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.473906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.481892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.481911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 Running I/O for 5 seconds... 00:42:40.243 [2024-12-07 11:54:39.495874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.495896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.509986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.510007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.516998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.517024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.530720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.530740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.542600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.542619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243 [2024-12-07 11:54:39.554974] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:40.243 [2024-12-07 11:54:39.554994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:40.243
[the same two-message error pair — subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused "Unable to add namespace" — repeats for every subsequent attempt from 11:54:39.566 through 11:54:41.518; only the timestamps differ]
17000.00 IOPS, 132.81 MiB/s [2024-12-07T10:54:40.645Z]
17046.50 IOPS, 133.18 MiB/s [2024-12-07T10:54:41.695Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.531287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.531305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.542292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.542311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.555938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.555956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.570544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.570562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.582588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.582607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.595089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.595108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.606255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.606274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.620154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.620173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:42.341 [2024-12-07 11:54:41.634564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.634582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.647253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.647272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.657509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.657528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.671705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.671724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.341 [2024-12-07 11:54:41.686498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.341 [2024-12-07 11:54:41.686517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.699360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.699379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.710066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.710084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.723894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.723913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.738638] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.738657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.750438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.750456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.763340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.763364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.772450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.772468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.786631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.786649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.798854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.798873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.810342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.810360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.823405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.823424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.832341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.832359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.846664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.846682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.858425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.858443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.871589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.871608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.880472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.880491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.894997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.895024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.904006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.904031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.918559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.918577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.930233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 
[2024-12-07 11:54:41.930250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.602 [2024-12-07 11:54:41.944098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.602 [2024-12-07 11:54:41.944117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:41.958507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:41.958526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:41.970396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:41.970414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:41.983614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:41.983633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:41.994294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:41.994316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.008125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.008144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.022378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.022396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.034501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.034519] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.047489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.047508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.056533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.056552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.071110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.071128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.080133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.080152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.094590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.094609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.106062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.106081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.112598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.112616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.126588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.126606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:42.863 [2024-12-07 11:54:42.139131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.139149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.149877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.149900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.163704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.163722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.172684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.172702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.186967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.186985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.196307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.196326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.863 [2024-12-07 11:54:42.210788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:42.863 [2024-12-07 11:54:42.210807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.220187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.220209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.234871] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.234890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.244080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.244098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.258342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.258360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.270990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.271008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.283362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.283380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.294088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.294106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.300724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.300742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.315196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.315214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.326066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.326084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.332881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.332899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.124 [2024-12-07 11:54:42.347281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.124 [2024-12-07 11:54:42.347299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.356164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.356182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.370725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.370744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.380743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.380761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.395146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.395164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.404040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.404058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.418336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 
[2024-12-07 11:54:42.418354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.431466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.431484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.441562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.441584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.455892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.455910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.125 [2024-12-07 11:54:42.470113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.125 [2024-12-07 11:54:42.470131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.477390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.477408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.490691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.490709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 17057.33 IOPS, 133.26 MiB/s [2024-12-07T10:54:42.740Z] [2024-12-07 11:54:42.501367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.501385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.515503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 
[2024-12-07 11:54:42.515522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.525061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.525080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.539391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.539408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.553543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.553561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.567023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.567041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.578428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.578446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.591336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.591354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.600287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.600306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.386 [2024-12-07 11:54:42.614770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.386 [2024-12-07 11:54:42.614789] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.626500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.626519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.639625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.639644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.649680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.649699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.663326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.663345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.672260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.672279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.686513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.686531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.698350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.698368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.387 [2024-12-07 11:54:42.711498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.711517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:43.387 [2024-12-07 11:54:42.725526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.387 [2024-12-07 11:54:42.725544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.739881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.739900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.754891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.754909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.764088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.764107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.778643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.778661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.790983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.791002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.803351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.803369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.814210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.814229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.827885] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.827904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.842110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.842129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.849191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.849209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.862942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.862961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.872054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.872073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.881959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.881979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.888780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.888799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.903037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.903056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.914401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.914419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.927232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.927251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.936227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.936245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.950739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.950757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.962265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.962283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.976073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.976092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.649 [2024-12-07 11:54:42.990565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.649 [2024-12-07 11:54:42.990583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.003993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.004019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.018417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 
[2024-12-07 11:54:43.018436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.030704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.030723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.043499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.043518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.057614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.057633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.072190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.072209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.086562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.086581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.098187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.098205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.111246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.111265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.120180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.120198] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.134221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.134244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.146714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.146732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.159515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.159534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.170167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.170185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.183694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.183713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.192849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.192867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.207244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.207263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.221752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.221772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:43.911 [2024-12-07 11:54:43.235483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.235502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.244556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.244575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:43.911 [2024-12-07 11:54:43.258742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:43.911 [2024-12-07 11:54:43.258760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.267931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.267950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.277871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.277890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.284510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.284528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.298722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.298741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.311319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.311339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.322136] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.322154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.335546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.335565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.344619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.174 [2024-12-07 11:54:43.344638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.174 [2024-12-07 11:54:43.358837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.358860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.370187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.370205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.384176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.384195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.398317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.398335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.410584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.410603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.423353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.423372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.433970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.433990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.440825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.440843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.454885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.454904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.466330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.466349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.479098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.479117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.487993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.488017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 17080.75 IOPS, 133.44 MiB/s [2024-12-07T10:54:43.529Z] [2024-12-07 11:54:43.501916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:44.175 [2024-12-07 11:54:43.501935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:44.175 [2024-12-07 11:54:43.515395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.515414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.175 [2024-12-07 11:54:43.524382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.175 [2024-12-07 11:54:43.524401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.539065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.539084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.550416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.550435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.563114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.563133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.572891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.572910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.587394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.587418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.601619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.601638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.615140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 
[2024-12-07 11:54:43.615158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.624051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.624071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.633710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.633729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.647717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.647736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.662391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.662410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.674459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.674477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.687597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.687616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.696754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.696773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.710934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.710953] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.721868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.721887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.735767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.735786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.750265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.750283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.436 [2024-12-07 11:54:43.762943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.436 [2024-12-07 11:54:43.762962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.437 [2024-12-07 11:54:43.775356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.437 [2024-12-07 11:54:43.775374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.437 [2024-12-07 11:54:43.786075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.437 [2024-12-07 11:54:43.786093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.799823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.799842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.813844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.813863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:44.699 [2024-12-07 11:54:43.827640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.827659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.842438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.842458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.855338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.855356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.864385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.864404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.878722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.878741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.888104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.888123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.902755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.902773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.914188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.914205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.927909] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.927927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.941802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.941820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.955084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.955103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.965905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.965924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.972529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.972547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.986424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.699 [2024-12-07 11:54:43.986443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.699 [2024-12-07 11:54:43.999493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.700 [2024-12-07 11:54:43.999511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.700 [2024-12-07 11:54:44.008383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.700 [2024-12-07 11:54:44.008402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.700 [2024-12-07 11:54:44.022624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:44.700 [2024-12-07 11:54:44.022642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.700 [2024-12-07 11:54:44.034180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.700 [2024-12-07 11:54:44.034198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.700 [2024-12-07 11:54:44.047679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.700 [2024-12-07 11:54:44.047698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.056575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.056593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.070613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.070631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.081988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.082007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.088541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.088559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.102654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.102672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.114720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 
[2024-12-07 11:54:44.114741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.127195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.127214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.139441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.139460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.150270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.150288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.163618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.163636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.172672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.172690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.187273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.187292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.196885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.196904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.211463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.211482] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.225703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.225721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.239529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.239548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.254204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.254223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.269816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.269837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.283537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.283556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.293762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.293781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:44.961 [2024-12-07 11:54:44.307364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:44.961 [2024-12-07 11:54:44.307383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.317449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.317468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:45.223 [2024-12-07 11:54:44.331207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.331226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.340333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.340352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.354756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.354775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.366058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.366076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.379805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.379824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.394548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.394566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.406742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.406760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.419005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.223 [2024-12-07 11:54:44.419029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.223 [2024-12-07 11:54:44.431306] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.223 [2024-12-07 11:54:44.431325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.442183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.442201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.455959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.455978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.470944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.470962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.480042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.480061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.494635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.494654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 17085.00 IOPS, 133.48 MiB/s [2024-12-07T10:54:44.578Z] [2024-12-07 11:54:44.501880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.501897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 
00:42:45.224 Latency(us)
00:42:45.224 [2024-12-07T10:54:44.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:45.224 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:45.224 Nvme1n1 : 5.00 17095.13 133.56 0.00 0.00 7481.68 2757.97 13707.95
00:42:45.224 [2024-12-07T10:54:44.578Z] ===================================================================================================================
00:42:45.224 [2024-12-07T10:54:44.578Z] Total : 17095.13 133.56 0.00 0.00 7481.68 2757.97 13707.95
00:42:45.224 [2024-12-07 11:54:44.509891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.509907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.517888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.517904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.525874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.525888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.533882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.533897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.541889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.541906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.549891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.549907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:45.224 [2024-12-07 11:54:44.557892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:45.224 [2024-12-07 11:54:44.557907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:45.224 [2024-12-07 11:54:44.565877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:45.224 [2024-12-07 11:54:44.565891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [previous two *ERROR* messages repeated at ~8 ms intervals through 2024-12-07 11:54:45.109909; duplicate log entries elided] 00:42:46.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2861780) - No such process 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2861780 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy
-- common/autotest_common.sh@10 -- # set +x 00:42:46.010 delay0 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.010 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:46.010 [2024-12-07 11:54:45.326237] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:54.144 Initializing NVMe Controllers 00:42:54.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:54.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:54.145 Initialization complete. Launching workers. 
00:42:54.145 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4463 00:42:54.145 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4744, failed to submit 39 00:42:54.145 success 4562, unsuccessful 182, failed 0 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:54.145 rmmod nvme_tcp 00:42:54.145 rmmod nvme_fabrics 00:42:54.145 rmmod nvme_keyring 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2859419 ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2859419 ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859419' 00:42:54.145 killing process with pid 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2859419 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:54.145 11:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.145 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:56.057 00:42:56.057 real 0m36.167s 00:42:56.057 user 0m48.226s 00:42:56.057 sys 0m12.274s 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:56.057 ************************************ 00:42:56.057 END TEST nvmf_zcopy 00:42:56.057 ************************************ 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:56.057 
************************************ 00:42:56.057 START TEST nvmf_nmic 00:42:56.057 ************************************ 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:56.057 * Looking for test storage... 00:42:56.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:56.057 11:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:56.057 11:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:56.057 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.058 --rc genhtml_branch_coverage=1 00:42:56.058 --rc genhtml_function_coverage=1 00:42:56.058 --rc genhtml_legend=1 00:42:56.058 --rc geninfo_all_blocks=1 00:42:56.058 --rc geninfo_unexecuted_blocks=1 00:42:56.058 00:42:56.058 ' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.058 --rc genhtml_branch_coverage=1 00:42:56.058 --rc genhtml_function_coverage=1 00:42:56.058 --rc genhtml_legend=1 00:42:56.058 --rc geninfo_all_blocks=1 00:42:56.058 --rc geninfo_unexecuted_blocks=1 00:42:56.058 00:42:56.058 ' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:56.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.058 --rc genhtml_branch_coverage=1 00:42:56.058 --rc genhtml_function_coverage=1 00:42:56.058 --rc genhtml_legend=1 00:42:56.058 --rc geninfo_all_blocks=1 00:42:56.058 --rc geninfo_unexecuted_blocks=1 00:42:56.058 00:42:56.058 ' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:56.058 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.058 --rc genhtml_branch_coverage=1 00:42:56.058 --rc genhtml_function_coverage=1 00:42:56.058 --rc genhtml_legend=1 00:42:56.058 --rc geninfo_all_blocks=1 00:42:56.058 --rc geninfo_unexecuted_blocks=1 00:42:56.058 00:42:56.058 ' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:56.058 11:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.058 11:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:56.058 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.204 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:04.204 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:04.204 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:04.205 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:04.205 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:04.205 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:04.205 Found net devices under 0000:31:00.0: cvl_0_0 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:04.205 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:04.205 Found net devices under 0000:31:00.1: cvl_0_1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:04.205 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:04.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:04.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:43:04.205 00:43:04.205 --- 10.0.0.2 ping statistics --- 00:43:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.205 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:04.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:04.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:43:04.205 00:43:04.205 --- 10.0.0.1 ping statistics --- 00:43:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.205 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2868514 
00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2868514 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2868514 ']' 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:04.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:04.205 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.205 [2024-12-07 11:55:02.468557] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:04.205 [2024-12-07 11:55:02.471224] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:43:04.205 [2024-12-07 11:55:02.471328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:04.206 [2024-12-07 11:55:02.622485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:04.206 [2024-12-07 11:55:02.725027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:04.206 [2024-12-07 11:55:02.725069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:04.206 [2024-12-07 11:55:02.725083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:04.206 [2024-12-07 11:55:02.725093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:04.206 [2024-12-07 11:55:02.725104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:04.206 [2024-12-07 11:55:02.727333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:04.206 [2024-12-07 11:55:02.727417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:04.206 [2024-12-07 11:55:02.727531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.206 [2024-12-07 11:55:02.727558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:04.206 [2024-12-07 11:55:02.988043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:04.206 [2024-12-07 11:55:02.988240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:04.206 [2024-12-07 11:55:02.989477] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:43:04.206 [2024-12-07 11:55:02.989554] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:04.206 [2024-12-07 11:55:02.989742] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 [2024-12-07 11:55:03.276711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 Malloc0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 [2024-12-07 11:55:03.384505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:04.206 11:55:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:04.206 test case1: single bdev can't be used in multiple subsystems 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 [2024-12-07 11:55:03.420187] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:04.206 [2024-12-07 11:55:03.420222] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:04.206 [2024-12-07 11:55:03.420234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:04.206 request: 00:43:04.206 { 00:43:04.206 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:04.206 "namespace": { 00:43:04.206 "bdev_name": "Malloc0", 00:43:04.206 "no_auto_visible": false, 00:43:04.206 "hide_metadata": false 00:43:04.206 }, 00:43:04.206 "method": "nvmf_subsystem_add_ns", 00:43:04.206 "req_id": 1 00:43:04.206 } 00:43:04.206 Got JSON-RPC error response 00:43:04.206 response: 00:43:04.206 { 00:43:04.206 "code": -32602, 00:43:04.206 "message": "Invalid parameters" 00:43:04.206 } 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:04.206 Adding namespace failed - expected result. 
00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:04.206 test case2: host connect to nvmf target in multiple paths 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:04.206 [2024-12-07 11:55:03.432298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.206 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:04.779 11:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:05.040 11:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:05.040 11:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:43:05.040 11:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:05.040 11:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:05.040 11:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:43:06.951 11:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:07.244 [global] 00:43:07.244 thread=1 00:43:07.244 invalidate=1 00:43:07.244 rw=write 00:43:07.244 time_based=1 00:43:07.244 runtime=1 00:43:07.244 ioengine=libaio 00:43:07.244 direct=1 00:43:07.244 bs=4096 00:43:07.244 iodepth=1 00:43:07.244 norandommap=0 00:43:07.244 numjobs=1 00:43:07.244 00:43:07.244 verify_dump=1 00:43:07.244 verify_backlog=512 00:43:07.244 verify_state_save=0 00:43:07.244 do_verify=1 00:43:07.244 verify=crc32c-intel 00:43:07.244 [job0] 00:43:07.244 filename=/dev/nvme0n1 00:43:07.244 Could not set queue depth (nvme0n1) 00:43:07.506 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:07.506 fio-3.35 00:43:07.506 Starting 1 thread 00:43:08.889 00:43:08.889 job0: (groupid=0, jobs=1): err= 0: pid=2869502: Sat Dec 7 
11:55:07 2024 00:43:08.889 read: IOPS=19, BW=78.8KiB/s (80.7kB/s)(80.0KiB/1015msec) 00:43:08.889 slat (nsec): min=9367, max=27835, avg=26549.80, stdev=4049.33 00:43:08.889 clat (usec): min=804, max=40988, avg=38940.71, stdev=8976.55 00:43:08.889 lat (usec): min=813, max=41015, avg=38967.26, stdev=8980.60 00:43:08.889 clat percentiles (usec): 00:43:08.889 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[40633], 20.00th=[41157], 00:43:08.889 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:08.889 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:08.889 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:08.889 | 99.99th=[41157] 00:43:08.889 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:43:08.889 slat (usec): min=9, max=31957, avg=92.51, stdev=1411.06 00:43:08.889 clat (usec): min=194, max=636, avg=358.75, stdev=97.13 00:43:08.889 lat (usec): min=223, max=32589, avg=451.26, stdev=1426.63 00:43:08.889 clat percentiles (usec): 00:43:08.889 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 285], 00:43:08.889 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 338], 60.00th=[ 396], 00:43:08.889 | 70.00th=[ 412], 80.00th=[ 441], 90.00th=[ 498], 95.00th=[ 537], 00:43:08.889 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 635], 99.95th=[ 635], 00:43:08.889 | 99.99th=[ 635] 00:43:08.889 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:43:08.889 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:08.889 lat (usec) : 250=14.47%, 500=72.56%, 750=9.21%, 1000=0.19% 00:43:08.889 lat (msec) : 50=3.57% 00:43:08.889 cpu : usr=1.18%, sys=1.78%, ctx=536, majf=0, minf=1 00:43:08.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.889 
issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.889 00:43:08.889 Run status group 0 (all jobs): 00:43:08.889 READ: bw=78.8KiB/s (80.7kB/s), 78.8KiB/s-78.8KiB/s (80.7kB/s-80.7kB/s), io=80.0KiB (81.9kB), run=1015-1015msec 00:43:08.889 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:43:08.889 00:43:08.889 Disk stats (read/write): 00:43:08.889 nvme0n1: ios=42/512, merge=0/0, ticks=1623/141, in_queue=1764, util=98.60% 00:43:08.889 11:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:09.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:09.190 11:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.190 rmmod nvme_tcp 00:43:09.190 rmmod nvme_fabrics 00:43:09.190 rmmod nvme_keyring 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2868514 ']' 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2868514 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2868514 ']' 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2868514 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868514 
00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868514' 00:43:09.190 killing process with pid 2868514 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2868514 00:43:09.190 11:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2868514 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:10.215 11:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:10.215 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.132 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:12.132 00:43:12.132 real 0m16.343s 00:43:12.132 user 0m33.944s 00:43:12.132 sys 0m7.431s 00:43:12.132 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:12.132 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:12.132 ************************************ 00:43:12.132 END TEST nvmf_nmic 00:43:12.132 ************************************ 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:12.392 ************************************ 00:43:12.392 START TEST nvmf_fio_target 00:43:12.392 ************************************ 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:12.392 * Looking for test storage... 
00:43:12.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:12.392 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:12.393 
11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.393 --rc genhtml_branch_coverage=1 00:43:12.393 --rc genhtml_function_coverage=1 00:43:12.393 --rc genhtml_legend=1 00:43:12.393 --rc geninfo_all_blocks=1 00:43:12.393 --rc geninfo_unexecuted_blocks=1 00:43:12.393 00:43:12.393 ' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.393 --rc genhtml_branch_coverage=1 00:43:12.393 --rc genhtml_function_coverage=1 00:43:12.393 --rc genhtml_legend=1 00:43:12.393 --rc geninfo_all_blocks=1 00:43:12.393 --rc geninfo_unexecuted_blocks=1 00:43:12.393 00:43:12.393 ' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.393 --rc genhtml_branch_coverage=1 00:43:12.393 --rc genhtml_function_coverage=1 00:43:12.393 --rc genhtml_legend=1 00:43:12.393 --rc geninfo_all_blocks=1 00:43:12.393 --rc geninfo_unexecuted_blocks=1 00:43:12.393 00:43:12.393 ' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:12.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.393 --rc genhtml_branch_coverage=1 00:43:12.393 --rc genhtml_function_coverage=1 00:43:12.393 --rc genhtml_legend=1 00:43:12.393 --rc geninfo_all_blocks=1 
00:43:12.393 --rc geninfo_unexecuted_blocks=1 00:43:12.393 00:43:12.393 ' 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:12.393 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:12.655 
11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.655 11:55:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:12.655 
11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:12.655 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:12.655 11:55:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:19.266 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:19.267 11:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:19.267 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:19.267 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:19.267 
11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:19.267 Found net 
devices under 0000:31:00.0: cvl_0_0 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:19.267 Found net devices under 0000:31:00.1: cvl_0_1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:19.267 11:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:19.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:19.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:43:19.267 00:43:19.267 --- 10.0.0.2 ping statistics --- 00:43:19.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:19.267 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:19.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:19.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:43:19.267 00:43:19.267 --- 10.0.0.1 ping statistics --- 00:43:19.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:19.267 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:19.267 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.268 11:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2874125 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2874125 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2874125 ']' 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:19.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.268 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:19.529 [2024-12-07 11:55:18.696193] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:19.529 [2024-12-07 11:55:18.698507] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:43:19.529 [2024-12-07 11:55:18.698594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:19.529 [2024-12-07 11:55:18.834382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:19.791 [2024-12-07 11:55:18.933289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:19.791 [2024-12-07 11:55:18.933331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:19.791 [2024-12-07 11:55:18.933345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:19.791 [2024-12-07 11:55:18.933354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:19.791 [2024-12-07 11:55:18.933365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:19.791 [2024-12-07 11:55:18.935578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:19.791 [2024-12-07 11:55:18.935663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:19.791 [2024-12-07 11:55:18.935778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:19.791 [2024-12-07 11:55:18.935803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:20.071 [2024-12-07 11:55:19.195787] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:20.071 [2024-12-07 11:55:19.196049] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:20.071 [2024-12-07 11:55:19.197332] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:43:20.071 [2024-12-07 11:55:19.197362] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:20.071 [2024-12-07 11:55:19.197668] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:20.332 [2024-12-07 11:55:19.632566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:20.332 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:20.594 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:20.594 11:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:43:20.855 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:20.855 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:21.116 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:21.116 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:21.377 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:21.377 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:21.638 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:21.898 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:21.898 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:22.158 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:22.158 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:22.418 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:43:22.418 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:22.418 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:22.679 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:22.679 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:22.942 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:22.942 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:22.942 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:23.203 [2024-12-07 11:55:22.368743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:23.203 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:23.463 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:23.463 11:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:24.035 11:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:43:25.950 11:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:25.950 [global] 00:43:25.950 thread=1 00:43:25.950 invalidate=1 00:43:25.950 rw=write 00:43:25.950 time_based=1 00:43:25.950 runtime=1 00:43:25.950 ioengine=libaio 00:43:25.950 direct=1 00:43:25.950 bs=4096 00:43:25.950 iodepth=1 00:43:25.950 norandommap=0 00:43:25.950 numjobs=1 00:43:25.950 00:43:25.950 verify_dump=1 00:43:25.950 verify_backlog=512 00:43:25.950 verify_state_save=0 00:43:25.950 do_verify=1 00:43:25.950 verify=crc32c-intel 00:43:25.950 [job0] 00:43:25.950 filename=/dev/nvme0n1 00:43:25.950 [job1] 00:43:25.950 filename=/dev/nvme0n2 00:43:25.950 [job2] 00:43:25.950 filename=/dev/nvme0n3 00:43:25.950 [job3] 00:43:25.950 filename=/dev/nvme0n4 00:43:25.950 Could not set queue depth (nvme0n1) 00:43:25.950 Could not set queue depth (nvme0n2) 00:43:25.950 Could not set queue depth (nvme0n3) 00:43:25.950 Could not set queue depth (nvme0n4) 00:43:26.529 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:26.529 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:26.529 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:26.529 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:26.529 fio-3.35 00:43:26.529 Starting 4 threads 00:43:27.915 00:43:27.915 job0: (groupid=0, jobs=1): err= 0: pid=2875703: Sat Dec 7 11:55:26 2024 00:43:27.915 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:43:27.915 slat (nsec): min=6912, max=59924, avg=22982.80, stdev=8407.92 00:43:27.915 clat (usec): min=167, max=1599, avg=562.90, stdev=153.18 00:43:27.915 lat (usec): min=174, 
max=1625, avg=585.88, stdev=154.56 00:43:27.915 clat percentiles (usec): 00:43:27.915 | 1.00th=[ 245], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 486], 00:43:27.916 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 562], 00:43:27.916 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 971], 00:43:27.916 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1532], 99.95th=[ 1598], 00:43:27.916 | 99.99th=[ 1598] 00:43:27.916 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:27.916 slat (nsec): min=9940, max=69061, avg=29444.20, stdev=10714.63 00:43:27.916 clat (usec): min=121, max=868, avg=346.47, stdev=86.16 00:43:27.916 lat (usec): min=131, max=901, avg=375.91, stdev=87.60 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 143], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 273], 00:43:27.916 | 30.00th=[ 302], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 367], 00:43:27.916 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 465], 00:43:27.916 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 865], 99.95th=[ 865], 00:43:27.916 | 99.99th=[ 865] 00:43:27.916 bw ( KiB/s): min= 4160, max= 4160, per=38.50%, avg=4160.00, stdev= 0.00, samples=1 00:43:27.916 iops : min= 1040, max= 1040, avg=1040.00, stdev= 0.00, samples=1 00:43:27.916 lat (usec) : 250=5.47%, 500=54.05%, 750=37.45%, 1000=0.73% 00:43:27.916 lat (msec) : 2=2.29% 00:43:27.916 cpu : usr=3.00%, sys=5.50%, ctx=2051, majf=0, minf=1 00:43:27.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:27.916 job1: (groupid=0, jobs=1): err= 0: pid=2875704: Sat Dec 7 11:55:26 2024 00:43:27.916 read: IOPS=155, BW=623KiB/s 
(638kB/s)(624KiB/1001msec) 00:43:27.916 slat (nsec): min=7523, max=45423, avg=26078.26, stdev=4820.22 00:43:27.916 clat (usec): min=546, max=41934, avg=4858.34, stdev=11885.54 00:43:27.916 lat (usec): min=572, max=41961, avg=4884.42, stdev=11885.97 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 578], 5.00th=[ 701], 10.00th=[ 807], 20.00th=[ 922], 00:43:27.916 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1074], 00:43:27.916 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1237], 95.00th=[41157], 00:43:27.916 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:27.916 | 99.99th=[41681] 00:43:27.916 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:43:27.916 slat (nsec): min=9934, max=63504, avg=28064.67, stdev=11170.81 00:43:27.916 clat (usec): min=157, max=625, avg=428.89, stdev=68.41 00:43:27.916 lat (usec): min=170, max=659, avg=456.96, stdev=74.41 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 273], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 359], 00:43:27.916 | 30.00th=[ 392], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 461], 00:43:27.916 | 70.00th=[ 474], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 519], 00:43:27.916 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 627], 00:43:27.916 | 99.99th=[ 627] 00:43:27.916 bw ( KiB/s): min= 4096, max= 4096, per=37.90%, avg=4096.00, stdev= 0.00, samples=1 00:43:27.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:27.916 lat (usec) : 250=0.30%, 500=67.66%, 750=10.03%, 1000=8.08% 00:43:27.916 lat (msec) : 2=11.68%, 50=2.25% 00:43:27.916 cpu : usr=1.10%, sys=1.60%, ctx=669, majf=0, minf=2 00:43:27.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 issued rwts: total=156,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:27.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:27.916 job2: (groupid=0, jobs=1): err= 0: pid=2875705: Sat Dec 7 11:55:26 2024 00:43:27.916 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:27.916 slat (nsec): min=8202, max=59379, avg=26639.77, stdev=3697.27 00:43:27.916 clat (usec): min=780, max=41458, avg=1185.16, stdev=1786.00 00:43:27.916 lat (usec): min=806, max=41487, avg=1211.80, stdev=1786.10 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 881], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1037], 00:43:27.916 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:43:27.916 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:43:27.916 | 99.00th=[ 1516], 99.50th=[ 1614], 99.90th=[41681], 99.95th=[41681], 00:43:27.916 | 99.99th=[41681] 00:43:27.916 write: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec); 0 zone resets 00:43:27.916 slat (nsec): min=10119, max=53642, avg=28246.39, stdev=11325.84 00:43:27.916 clat (usec): min=187, max=993, avg=533.25, stdev=149.03 00:43:27.916 lat (usec): min=199, max=1028, avg=561.50, stdev=151.55 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 225], 5.00th=[ 322], 10.00th=[ 359], 20.00th=[ 412], 00:43:27.916 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 553], 00:43:27.916 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 742], 95.00th=[ 832], 00:43:27.916 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:43:27.916 | 99.99th=[ 996] 00:43:27.916 bw ( KiB/s): min= 4096, max= 4096, per=37.90%, avg=4096.00, stdev= 0.00, samples=1 00:43:27.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:27.916 lat (usec) : 250=0.60%, 500=25.28%, 750=25.19%, 1000=10.59% 00:43:27.916 lat (msec) : 2=38.26%, 50=0.09% 00:43:27.916 cpu : usr=1.70%, sys=3.30%, ctx=1172, majf=0, minf=1 00:43:27.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:43:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 issued rwts: total=512,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:27.916 job3: (groupid=0, jobs=1): err= 0: pid=2875706: Sat Dec 7 11:55:26 2024 00:43:27.916 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:43:27.916 slat (nsec): min=26412, max=27404, avg=26752.62, stdev=262.17 00:43:27.916 clat (usec): min=41027, max=42128, avg=41830.29, stdev=311.78 00:43:27.916 lat (usec): min=41054, max=42155, avg=41857.04, stdev=311.73 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:43:27.916 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:27.916 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:27.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:27.916 | 99.99th=[42206] 00:43:27.916 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:43:27.916 slat (nsec): min=10340, max=71482, avg=33681.56, stdev=7898.43 00:43:27.916 clat (usec): min=211, max=1034, avg=606.79, stdev=134.16 00:43:27.916 lat (usec): min=223, max=1069, avg=640.47, stdev=135.76 00:43:27.916 clat percentiles (usec): 00:43:27.916 | 1.00th=[ 293], 5.00th=[ 375], 10.00th=[ 424], 20.00th=[ 494], 00:43:27.916 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 652], 00:43:27.916 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:43:27.916 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1037], 99.95th=[ 1037], 00:43:27.916 | 99.99th=[ 1037] 00:43:27.916 bw ( KiB/s): min= 4096, max= 4096, per=37.90%, avg=4096.00, stdev= 0.00, samples=1 00:43:27.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:27.916 lat (usec) : 250=0.38%, 
500=20.83%, 750=61.74%, 1000=13.64% 00:43:27.916 lat (msec) : 2=0.38%, 50=3.03% 00:43:27.916 cpu : usr=0.70%, sys=1.80%, ctx=529, majf=0, minf=1 00:43:27.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.916 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:27.916 00:43:27.916 Run status group 0 (all jobs): 00:43:27.916 READ: bw=6818KiB/s (6982kB/s), 63.9KiB/s-4092KiB/s (65.4kB/s-4190kB/s), io=6832KiB (6996kB), run=1001-1002msec 00:43:27.916 WRITE: bw=10.6MiB/s (11.1MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1002msec 00:43:27.916 00:43:27.916 Disk stats (read/write): 00:43:27.916 nvme0n1: ios=825/1024, merge=0/0, ticks=490/360, in_queue=850, util=87.17% 00:43:27.916 nvme0n2: ios=85/512, merge=0/0, ticks=1431/211, in_queue=1642, util=87.64% 00:43:27.916 nvme0n3: ios=491/512, merge=0/0, ticks=1380/265, in_queue=1645, util=91.73% 00:43:27.916 nvme0n4: ios=36/512, merge=0/0, ticks=1339/310, in_queue=1649, util=94.21% 00:43:27.916 11:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:27.916 [global] 00:43:27.916 thread=1 00:43:27.916 invalidate=1 00:43:27.916 rw=randwrite 00:43:27.916 time_based=1 00:43:27.916 runtime=1 00:43:27.916 ioengine=libaio 00:43:27.916 direct=1 00:43:27.916 bs=4096 00:43:27.916 iodepth=1 00:43:27.916 norandommap=0 00:43:27.916 numjobs=1 00:43:27.916 00:43:27.916 verify_dump=1 00:43:27.916 verify_backlog=512 00:43:27.916 verify_state_save=0 00:43:27.916 do_verify=1 00:43:27.916 verify=crc32c-intel 00:43:27.916 [job0] 00:43:27.916 filename=/dev/nvme0n1 00:43:27.916 
[job1] 00:43:27.916 filename=/dev/nvme0n2 00:43:27.916 [job2] 00:43:27.916 filename=/dev/nvme0n3 00:43:27.916 [job3] 00:43:27.916 filename=/dev/nvme0n4 00:43:27.916 Could not set queue depth (nvme0n1) 00:43:27.916 Could not set queue depth (nvme0n2) 00:43:27.916 Could not set queue depth (nvme0n3) 00:43:27.916 Could not set queue depth (nvme0n4) 00:43:28.178 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:28.178 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:28.178 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:28.178 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:28.178 fio-3.35 00:43:28.178 Starting 4 threads 00:43:29.572 00:43:29.572 job0: (groupid=0, jobs=1): err= 0: pid=2876224: Sat Dec 7 11:55:28 2024 00:43:29.572 read: IOPS=782, BW=3131KiB/s (3206kB/s)(3184KiB/1017msec) 00:43:29.572 slat (nsec): min=6967, max=59343, avg=22983.64, stdev=8712.00 00:43:29.572 clat (usec): min=192, max=42033, avg=792.88, stdev=3266.58 00:43:29.572 lat (usec): min=219, max=42061, avg=815.86, stdev=3267.29 00:43:29.572 clat percentiles (usec): 00:43:29.572 | 1.00th=[ 334], 5.00th=[ 408], 10.00th=[ 441], 20.00th=[ 469], 00:43:29.572 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 562], 00:43:29.573 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 619], 95.00th=[ 652], 00:43:29.573 | 99.00th=[ 709], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:29.573 | 99.99th=[42206] 00:43:29.573 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:43:29.573 slat (nsec): min=9502, max=66790, avg=25686.15, stdev=11501.29 00:43:29.573 clat (usec): min=104, max=786, avg=320.05, stdev=104.06 00:43:29.573 lat (usec): min=113, max=821, avg=345.73, stdev=110.35 00:43:29.573 clat percentiles (usec): 
00:43:29.573 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 133], 20.00th=[ 241], 00:43:29.573 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 351], 60.00th=[ 371], 00:43:29.573 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 453], 00:43:29.573 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 783], 00:43:29.573 | 99.99th=[ 783] 00:43:29.573 bw ( KiB/s): min= 3104, max= 5088, per=41.44%, avg=4096.00, stdev=1402.90, samples=2 00:43:29.573 iops : min= 776, max= 1272, avg=1024.00, stdev=350.72, samples=2 00:43:29.573 lat (usec) : 250=12.14%, 500=56.98%, 750=30.49%, 1000=0.11% 00:43:29.573 lat (msec) : 50=0.27% 00:43:29.573 cpu : usr=2.46%, sys=4.43%, ctx=1823, majf=0, minf=1 00:43:29.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 issued rwts: total=796,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:29.573 job1: (groupid=0, jobs=1): err= 0: pid=2876225: Sat Dec 7 11:55:28 2024 00:43:29.573 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:43:29.573 slat (nsec): min=25481, max=43630, avg=26794.50, stdev=4206.69 00:43:29.573 clat (usec): min=1004, max=42133, avg=39668.17, stdev=9650.00 00:43:29.573 lat (usec): min=1031, max=42159, avg=39694.97, stdev=9650.10 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41681], 20.00th=[41681], 00:43:29.573 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:29.573 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:29.573 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:29.573 | 99.99th=[42206] 00:43:29.573 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:43:29.573 slat (nsec): 
min=9649, max=59159, avg=31981.27, stdev=7244.16 00:43:29.573 clat (usec): min=142, max=1014, avg=585.94, stdev=139.64 00:43:29.573 lat (usec): min=154, max=1046, avg=617.92, stdev=140.59 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 273], 5.00th=[ 355], 10.00th=[ 412], 20.00th=[ 469], 00:43:29.573 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 627], 00:43:29.573 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 807], 00:43:29.573 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1012], 00:43:29.573 | 99.99th=[ 1012] 00:43:29.573 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:43:29.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:29.573 lat (usec) : 250=0.57%, 500=24.34%, 750=60.75%, 1000=10.75% 00:43:29.573 lat (msec) : 2=0.38%, 50=3.21% 00:43:29.573 cpu : usr=0.58%, sys=1.84%, ctx=531, majf=0, minf=1 00:43:29.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:29.573 job2: (groupid=0, jobs=1): err= 0: pid=2876226: Sat Dec 7 11:55:28 2024 00:43:29.573 read: IOPS=92, BW=372KiB/s (381kB/s)(372KiB/1001msec) 00:43:29.573 slat (nsec): min=9534, max=40786, avg=26693.77, stdev=2346.74 00:43:29.573 clat (usec): min=797, max=42020, avg=7352.91, stdev=14674.46 00:43:29.573 lat (usec): min=824, max=42047, avg=7379.61, stdev=14674.05 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 799], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 971], 00:43:29.573 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1020], 60.00th=[ 1045], 00:43:29.573 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[41157], 95.00th=[41681], 00:43:29.573 
| 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:29.573 | 99.99th=[42206] 00:43:29.573 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:43:29.573 slat (nsec): min=9860, max=52970, avg=30349.39, stdev=8949.37 00:43:29.573 clat (usec): min=212, max=942, avg=573.07, stdev=127.47 00:43:29.573 lat (usec): min=225, max=975, avg=603.42, stdev=130.68 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 265], 5.00th=[ 363], 10.00th=[ 408], 20.00th=[ 469], 00:43:29.573 | 30.00th=[ 498], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:43:29.573 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 775], 00:43:29.573 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 947], 00:43:29.573 | 99.99th=[ 947] 00:43:29.573 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:43:29.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:29.573 lat (usec) : 250=0.83%, 500=25.29%, 750=52.56%, 1000=11.57% 00:43:29.573 lat (msec) : 2=7.27%, 50=2.48% 00:43:29.573 cpu : usr=1.00%, sys=1.70%, ctx=606, majf=0, minf=1 00:43:29.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 issued rwts: total=93,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:29.573 job3: (groupid=0, jobs=1): err= 0: pid=2876227: Sat Dec 7 11:55:28 2024 00:43:29.573 read: IOPS=184, BW=739KiB/s (757kB/s)(740KiB/1001msec) 00:43:29.573 slat (nsec): min=7503, max=45044, avg=26131.36, stdev=5051.05 00:43:29.573 clat (usec): min=621, max=42010, avg=3658.60, stdev=9901.92 00:43:29.573 lat (usec): min=629, max=42023, avg=3684.73, stdev=9901.41 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 652], 5.00th=[ 725], 
10.00th=[ 816], 20.00th=[ 922], 00:43:29.573 | 30.00th=[ 1004], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1139], 00:43:29.573 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[41157], 00:43:29.573 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:29.573 | 99.99th=[42206] 00:43:29.573 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:43:29.573 slat (nsec): min=10279, max=66646, avg=32463.07, stdev=7352.41 00:43:29.573 clat (usec): min=159, max=1032, avg=578.16, stdev=149.12 00:43:29.573 lat (usec): min=170, max=1066, avg=610.62, stdev=150.68 00:43:29.573 clat percentiles (usec): 00:43:29.573 | 1.00th=[ 277], 5.00th=[ 330], 10.00th=[ 371], 20.00th=[ 449], 00:43:29.573 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 586], 60.00th=[ 619], 00:43:29.573 | 70.00th=[ 652], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 832], 00:43:29.573 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 1037], 99.95th=[ 1037], 00:43:29.573 | 99.99th=[ 1037] 00:43:29.573 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:43:29.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:29.573 lat (usec) : 250=0.72%, 500=20.80%, 750=43.90%, 1000=15.78% 00:43:29.573 lat (msec) : 2=17.07%, 50=1.72% 00:43:29.573 cpu : usr=0.90%, sys=2.30%, ctx=701, majf=0, minf=1 00:43:29.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.573 issued rwts: total=185,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:29.573 00:43:29.573 Run status group 0 (all jobs): 00:43:29.573 READ: bw=4216KiB/s (4317kB/s), 69.5KiB/s-3131KiB/s (71.2kB/s-3206kB/s), io=4368KiB (4473kB), run=1001-1036msec 00:43:29.573 WRITE: bw=9884KiB/s (10.1MB/s), 
1977KiB/s-4028KiB/s (2024kB/s-4124kB/s), io=10.0MiB (10.5MB), run=1001-1036msec 00:43:29.573 00:43:29.573 Disk stats (read/write): 00:43:29.573 nvme0n1: ios=833/1024, merge=0/0, ticks=453/304, in_queue=757, util=87.07% 00:43:29.573 nvme0n2: ios=55/512, merge=0/0, ticks=606/276, in_queue=882, util=91.18% 00:43:29.573 nvme0n3: ios=63/512, merge=0/0, ticks=654/276, in_queue=930, util=93.95% 00:43:29.573 nvme0n4: ios=189/512, merge=0/0, ticks=706/272, in_queue=978, util=95.06% 00:43:29.573 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:29.573 [global] 00:43:29.573 thread=1 00:43:29.573 invalidate=1 00:43:29.573 rw=write 00:43:29.573 time_based=1 00:43:29.573 runtime=1 00:43:29.573 ioengine=libaio 00:43:29.573 direct=1 00:43:29.573 bs=4096 00:43:29.573 iodepth=128 00:43:29.573 norandommap=0 00:43:29.573 numjobs=1 00:43:29.573 00:43:29.573 verify_dump=1 00:43:29.573 verify_backlog=512 00:43:29.573 verify_state_save=0 00:43:29.573 do_verify=1 00:43:29.573 verify=crc32c-intel 00:43:29.573 [job0] 00:43:29.573 filename=/dev/nvme0n1 00:43:29.573 [job1] 00:43:29.573 filename=/dev/nvme0n2 00:43:29.573 [job2] 00:43:29.573 filename=/dev/nvme0n3 00:43:29.573 [job3] 00:43:29.573 filename=/dev/nvme0n4 00:43:29.573 Could not set queue depth (nvme0n1) 00:43:29.573 Could not set queue depth (nvme0n2) 00:43:29.573 Could not set queue depth (nvme0n3) 00:43:29.573 Could not set queue depth (nvme0n4) 00:43:29.833 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:29.833 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:29.833 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:29.833 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:43:29.833 fio-3.35 00:43:29.833 Starting 4 threads 00:43:31.252 00:43:31.252 job0: (groupid=0, jobs=1): err= 0: pid=2876724: Sat Dec 7 11:55:30 2024 00:43:31.252 read: IOPS=6212, BW=24.3MiB/s (25.4MB/s)(25.4MiB/1046msec) 00:43:31.252 slat (nsec): min=906, max=15388k, avg=61793.77, stdev=543434.36 00:43:31.252 clat (usec): min=1716, max=60564, avg=10189.14, stdev=7610.27 00:43:31.252 lat (usec): min=1719, max=60570, avg=10250.93, stdev=7633.50 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 3032], 5.00th=[ 4621], 10.00th=[ 6194], 20.00th=[ 6783], 00:43:31.252 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 8586], 60.00th=[ 8979], 00:43:31.252 | 70.00th=[ 9896], 80.00th=[11469], 90.00th=[15401], 95.00th=[17695], 00:43:31.252 | 99.00th=[55313], 99.50th=[55837], 99.90th=[60556], 99.95th=[60556], 00:43:31.252 | 99.99th=[60556] 00:43:31.252 write: IOPS=6363, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1046msec); 0 zone resets 00:43:31.252 slat (nsec): min=1585, max=12911k, avg=66143.47, stdev=487064.78 00:43:31.252 clat (usec): min=556, max=66180, avg=10006.04, stdev=8841.04 00:43:31.252 lat (usec): min=572, max=66190, avg=10072.18, stdev=8899.62 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 1975], 5.00th=[ 3425], 10.00th=[ 4293], 20.00th=[ 5669], 00:43:31.252 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7439], 60.00th=[ 8291], 00:43:31.252 | 70.00th=[ 9110], 80.00th=[11600], 90.00th=[16712], 95.00th=[25822], 00:43:31.252 | 99.00th=[54789], 99.50th=[58459], 99.90th=[66323], 99.95th=[66323], 00:43:31.252 | 99.99th=[66323] 00:43:31.252 bw ( KiB/s): min=24432, max=28816, per=30.66%, avg=26624.00, stdev=3099.96, samples=2 00:43:31.252 iops : min= 6108, max= 7204, avg=6656.00, stdev=774.99, samples=2 00:43:31.252 lat (usec) : 750=0.02%, 1000=0.03% 00:43:31.252 lat (msec) : 2=0.49%, 4=5.53%, 10=65.62%, 20=22.90%, 50=3.62% 00:43:31.252 lat (msec) : 100=1.79% 00:43:31.252 cpu : usr=3.44%, sys=6.99%, ctx=443, majf=0, 
minf=1 00:43:31.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:31.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:31.252 issued rwts: total=6498,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:31.252 job1: (groupid=0, jobs=1): err= 0: pid=2876750: Sat Dec 7 11:55:30 2024 00:43:31.252 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:43:31.252 slat (nsec): min=895, max=16094k, avg=79176.67, stdev=686098.52 00:43:31.252 clat (usec): min=3102, max=33388, avg=10231.40, stdev=4286.58 00:43:31.252 lat (usec): min=3107, max=33412, avg=10310.58, stdev=4348.77 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 4490], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 6783], 00:43:31.252 | 30.00th=[ 7373], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10421], 00:43:31.252 | 70.00th=[10814], 80.00th=[12649], 90.00th=[15926], 95.00th=[18220], 00:43:31.252 | 99.00th=[26084], 99.50th=[29754], 99.90th=[32375], 99.95th=[32375], 00:43:31.252 | 99.99th=[33424] 00:43:31.252 write: IOPS=5732, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1006msec); 0 zone resets 00:43:31.252 slat (nsec): min=1587, max=9834.3k, avg=81322.74, stdev=502809.18 00:43:31.252 clat (usec): min=573, max=47118, avg=12118.58, stdev=8871.07 00:43:31.252 lat (usec): min=583, max=47127, avg=12199.90, stdev=8935.74 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 2966], 5.00th=[ 4178], 10.00th=[ 5080], 20.00th=[ 5800], 00:43:31.252 | 30.00th=[ 6521], 40.00th=[ 7635], 50.00th=[ 8717], 60.00th=[10552], 00:43:31.252 | 70.00th=[12649], 80.00th=[15401], 90.00th=[26084], 95.00th=[30802], 00:43:31.252 | 99.00th=[43254], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:43:31.252 | 99.99th=[46924] 00:43:31.252 bw ( KiB/s): min=21360, max=23760, per=25.98%, avg=22560.00, stdev=1697.06, 
samples=2 00:43:31.252 iops : min= 5340, max= 5940, avg=5640.00, stdev=424.26, samples=2 00:43:31.252 lat (usec) : 750=0.03%, 1000=0.04% 00:43:31.252 lat (msec) : 2=0.22%, 4=2.26%, 10=55.02%, 20=31.93%, 50=10.49% 00:43:31.252 cpu : usr=3.78%, sys=6.37%, ctx=402, majf=0, minf=1 00:43:31.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:31.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:31.252 issued rwts: total=5632,5767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:31.252 job2: (groupid=0, jobs=1): err= 0: pid=2876753: Sat Dec 7 11:55:30 2024 00:43:31.252 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:43:31.252 slat (nsec): min=942, max=12552k, avg=90108.45, stdev=618171.20 00:43:31.252 clat (usec): min=3693, max=31933, avg=12043.43, stdev=3838.20 00:43:31.252 lat (usec): min=3703, max=42296, avg=12133.54, stdev=3886.44 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 5604], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9110], 00:43:31.252 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11207], 60.00th=[12125], 00:43:31.252 | 70.00th=[13304], 80.00th=[13960], 90.00th=[16450], 95.00th=[19006], 00:43:31.252 | 99.00th=[24511], 99.50th=[29754], 99.90th=[31851], 99.95th=[31851], 00:43:31.252 | 99.99th=[31851] 00:43:31.252 write: IOPS=5440, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1005msec); 0 zone resets 00:43:31.252 slat (nsec): min=1579, max=10078k, avg=92965.07, stdev=611286.76 00:43:31.252 clat (usec): min=2985, max=36681, avg=11962.26, stdev=5358.96 00:43:31.252 lat (usec): min=3978, max=36692, avg=12055.23, stdev=5408.28 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 7832], 20.00th=[ 8586], 00:43:31.252 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11338], 00:43:31.252 | 
70.00th=[12518], 80.00th=[14615], 90.00th=[19006], 95.00th=[22938], 00:43:31.252 | 99.00th=[32637], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:43:31.252 | 99.99th=[36439] 00:43:31.252 bw ( KiB/s): min=20320, max=22400, per=24.60%, avg=21360.00, stdev=1470.78, samples=2 00:43:31.252 iops : min= 5080, max= 5600, avg=5340.00, stdev=367.70, samples=2 00:43:31.252 lat (msec) : 4=0.20%, 10=40.19%, 20=52.67%, 50=6.94% 00:43:31.252 cpu : usr=4.08%, sys=6.27%, ctx=305, majf=0, minf=1 00:43:31.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:31.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:31.252 issued rwts: total=5120,5468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:31.252 job3: (groupid=0, jobs=1): err= 0: pid=2876754: Sat Dec 7 11:55:30 2024 00:43:31.252 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:43:31.252 slat (nsec): min=924, max=11568k, avg=104585.32, stdev=728506.61 00:43:31.252 clat (usec): min=4354, max=46156, avg=14612.39, stdev=7485.99 00:43:31.252 lat (usec): min=4359, max=49397, avg=14716.98, stdev=7550.59 00:43:31.252 clat percentiles (usec): 00:43:31.252 | 1.00th=[ 6325], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8356], 00:43:31.252 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[12256], 60.00th=[14484], 00:43:31.253 | 70.00th=[16909], 80.00th=[20579], 90.00th=[25035], 95.00th=[29754], 00:43:31.253 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42730], 99.95th=[46400], 00:43:31.253 | 99.99th=[46400] 00:43:31.253 write: IOPS=4788, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1006msec); 0 zone resets 00:43:31.253 slat (nsec): min=1575, max=9016.9k, avg=102610.91, stdev=585322.08 00:43:31.253 clat (usec): min=759, max=42772, avg=12344.30, stdev=6773.59 00:43:31.253 lat (usec): min=4467, max=42781, avg=12446.91, stdev=6824.94 
00:43:31.253 clat percentiles (usec): 00:43:31.253 | 1.00th=[ 5145], 5.00th=[ 6915], 10.00th=[ 7701], 20.00th=[ 8291], 00:43:31.253 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10552], 00:43:31.253 | 70.00th=[12649], 80.00th=[15008], 90.00th=[22938], 95.00th=[30016], 00:43:31.253 | 99.00th=[35390], 99.50th=[39060], 99.90th=[41681], 99.95th=[42730], 00:43:31.253 | 99.99th=[42730] 00:43:31.253 bw ( KiB/s): min=17032, max=20480, per=21.60%, avg=18756.00, stdev=2438.10, samples=2 00:43:31.253 iops : min= 4258, max= 5120, avg=4689.00, stdev=609.53, samples=2 00:43:31.253 lat (usec) : 1000=0.01% 00:43:31.253 lat (msec) : 10=43.96%, 20=39.77%, 50=16.27% 00:43:31.253 cpu : usr=3.08%, sys=5.37%, ctx=331, majf=0, minf=1 00:43:31.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:31.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:31.253 issued rwts: total=4608,4817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:31.253 00:43:31.253 Run status group 0 (all jobs): 00:43:31.253 READ: bw=81.6MiB/s (85.6MB/s), 17.9MiB/s-24.3MiB/s (18.8MB/s-25.4MB/s), io=85.4MiB (89.5MB), run=1005-1046msec 00:43:31.253 WRITE: bw=84.8MiB/s (88.9MB/s), 18.7MiB/s-24.9MiB/s (19.6MB/s-26.1MB/s), io=88.7MiB (93.0MB), run=1005-1046msec 00:43:31.253 00:43:31.253 Disk stats (read/write): 00:43:31.253 nvme0n1: ios=6547/6656, merge=0/0, ticks=56808/64700, in_queue=121508, util=87.01% 00:43:31.253 nvme0n2: ios=4137/4351, merge=0/0, ticks=41326/53843, in_queue=95169, util=91.74% 00:43:31.253 nvme0n3: ios=4126/4608, merge=0/0, ticks=22446/25342, in_queue=47788, util=86.72% 00:43:31.253 nvme0n4: ios=3584/3873, merge=0/0, ticks=20563/20263, in_queue=40826, util=88.98% 00:43:31.253 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:31.253 [global] 00:43:31.253 thread=1 00:43:31.253 invalidate=1 00:43:31.253 rw=randwrite 00:43:31.253 time_based=1 00:43:31.253 runtime=1 00:43:31.253 ioengine=libaio 00:43:31.253 direct=1 00:43:31.253 bs=4096 00:43:31.253 iodepth=128 00:43:31.253 norandommap=0 00:43:31.253 numjobs=1 00:43:31.253 00:43:31.253 verify_dump=1 00:43:31.253 verify_backlog=512 00:43:31.253 verify_state_save=0 00:43:31.253 do_verify=1 00:43:31.253 verify=crc32c-intel 00:43:31.253 [job0] 00:43:31.253 filename=/dev/nvme0n1 00:43:31.253 [job1] 00:43:31.253 filename=/dev/nvme0n2 00:43:31.253 [job2] 00:43:31.253 filename=/dev/nvme0n3 00:43:31.253 [job3] 00:43:31.253 filename=/dev/nvme0n4 00:43:31.253 Could not set queue depth (nvme0n1) 00:43:31.253 Could not set queue depth (nvme0n2) 00:43:31.253 Could not set queue depth (nvme0n3) 00:43:31.253 Could not set queue depth (nvme0n4) 00:43:31.518 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:31.518 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:31.518 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:31.518 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:31.518 fio-3.35 00:43:31.518 Starting 4 threads 00:43:32.930 00:43:32.930 job0: (groupid=0, jobs=1): err= 0: pid=2877180: Sat Dec 7 11:55:31 2024 00:43:32.930 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:43:32.930 slat (nsec): min=964, max=9539.9k, avg=86766.06, stdev=620614.32 00:43:32.930 clat (usec): min=4043, max=35396, avg=10552.38, stdev=4179.62 00:43:32.930 lat (usec): min=4053, max=35399, avg=10639.15, stdev=4231.64 00:43:32.930 clat percentiles (usec): 00:43:32.930 | 1.00th=[ 4948], 
5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 7767], 00:43:32.930 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10421], 00:43:32.930 | 70.00th=[11076], 80.00th=[11600], 90.00th=[14746], 95.00th=[18482], 00:43:32.930 | 99.00th=[28967], 99.50th=[32113], 99.90th=[35390], 99.95th=[35390], 00:43:32.930 | 99.99th=[35390] 00:43:32.930 write: IOPS=5055, BW=19.7MiB/s (20.7MB/s)(19.9MiB/1006msec); 0 zone resets 00:43:32.930 slat (nsec): min=1690, max=8098.4k, avg=112535.14, stdev=551689.11 00:43:32.930 clat (usec): min=1249, max=41536, avg=15497.19, stdev=9213.10 00:43:32.930 lat (usec): min=1259, max=41540, avg=15609.73, stdev=9273.50 00:43:32.930 clat percentiles (usec): 00:43:32.930 | 1.00th=[ 5014], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7439], 00:43:32.930 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[12649], 60.00th=[15533], 00:43:32.930 | 70.00th=[20055], 80.00th=[24773], 90.00th=[29492], 95.00th=[33817], 00:43:32.930 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:43:32.931 | 99.99th=[41681] 00:43:32.931 bw ( KiB/s): min=18504, max=21168, per=23.24%, avg=19836.00, stdev=1883.73, samples=2 00:43:32.931 iops : min= 4626, max= 5292, avg=4959.00, stdev=470.93, samples=2 00:43:32.931 lat (msec) : 2=0.02%, 4=0.06%, 10=48.34%, 20=33.84%, 50=17.74% 00:43:32.931 cpu : usr=4.48%, sys=4.68%, ctx=471, majf=0, minf=1 00:43:32.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.931 issued rwts: total=4608,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:32.931 job1: (groupid=0, jobs=1): err= 0: pid=2877197: Sat Dec 7 11:55:31 2024 00:43:32.931 read: IOPS=6167, BW=24.1MiB/s (25.3MB/s)(24.2MiB/1003msec) 00:43:32.931 slat (nsec): min=942, max=15458k, avg=77806.81, 
stdev=429110.51 00:43:32.931 clat (usec): min=1012, max=27479, avg=10040.37, stdev=2857.21 00:43:32.931 lat (usec): min=2969, max=27508, avg=10118.18, stdev=2881.98 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 5014], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7635], 00:43:32.931 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[10945], 00:43:32.931 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12780], 95.00th=[13698], 00:43:32.931 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:43:32.931 | 99.99th=[27395] 00:43:32.931 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:43:32.931 slat (nsec): min=1571, max=12876k, avg=73294.87, stdev=459116.48 00:43:32.931 clat (usec): min=2598, max=43129, avg=9564.74, stdev=4342.70 00:43:32.931 lat (usec): min=2614, max=43164, avg=9638.03, stdev=4375.36 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 6259], 20.00th=[ 7046], 00:43:32.931 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9634], 00:43:32.931 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12387], 95.00th=[15270], 00:43:32.931 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:43:32.931 | 99.99th=[43254] 00:43:32.931 bw ( KiB/s): min=25144, max=27424, per=30.79%, avg=26284.00, stdev=1612.20, samples=2 00:43:32.931 iops : min= 6286, max= 6856, avg=6571.00, stdev=403.05, samples=2 00:43:32.931 lat (msec) : 2=0.01%, 4=0.46%, 10=56.91%, 20=40.66%, 50=1.95% 00:43:32.931 cpu : usr=3.99%, sys=6.39%, ctx=634, majf=0, minf=1 00:43:32.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.931 issued rwts: total=6186,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.931 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:43:32.931 job2: (groupid=0, jobs=1): err= 0: pid=2877223: Sat Dec 7 11:55:31 2024 00:43:32.931 read: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:43:32.931 slat (nsec): min=969, max=10146k, avg=104339.21, stdev=691064.22 00:43:32.931 clat (usec): min=794, max=41044, avg=13016.93, stdev=5052.72 00:43:32.931 lat (usec): min=3555, max=41075, avg=13121.27, stdev=5103.00 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 7898], 20.00th=[ 8848], 00:43:32.931 | 30.00th=[10028], 40.00th=[11469], 50.00th=[12387], 60.00th=[13435], 00:43:32.931 | 70.00th=[14353], 80.00th=[15533], 90.00th=[18744], 95.00th=[23462], 00:43:32.931 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[33162], 00:43:32.931 | 99.99th=[41157] 00:43:32.931 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:43:32.931 slat (nsec): min=1645, max=8000.0k, avg=108989.90, stdev=607123.09 00:43:32.931 clat (usec): min=1535, max=74421, avg=14742.88, stdev=11586.19 00:43:32.931 lat (usec): min=1545, max=74428, avg=14851.87, stdev=11670.47 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 4752], 5.00th=[ 5407], 10.00th=[ 7635], 20.00th=[ 8455], 00:43:32.931 | 30.00th=[10159], 40.00th=[10421], 50.00th=[11469], 60.00th=[12649], 00:43:32.931 | 70.00th=[13698], 80.00th=[15533], 90.00th=[25035], 95.00th=[35914], 00:43:32.931 | 99.00th=[69731], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:43:32.931 | 99.99th=[73925] 00:43:32.931 bw ( KiB/s): min=20480, max=20480, per=23.99%, avg=20480.00, stdev= 0.00, samples=1 00:43:32.931 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:43:32.931 lat (usec) : 1000=0.01% 00:43:32.931 lat (msec) : 2=0.16%, 4=0.49%, 10=25.84%, 20=62.60%, 50=9.24% 00:43:32.931 lat (msec) : 100=1.65% 00:43:32.931 cpu : usr=2.40%, sys=6.39%, ctx=408, majf=0, minf=2 00:43:32.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:32.931 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.931 issued rwts: total=4508,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:32.931 job3: (groupid=0, jobs=1): err= 0: pid=2877231: Sat Dec 7 11:55:31 2024 00:43:32.931 read: IOPS=4979, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:43:32.931 slat (nsec): min=957, max=14987k, avg=103204.54, stdev=711390.41 00:43:32.931 clat (usec): min=1863, max=48384, avg=13200.69, stdev=8086.21 00:43:32.931 lat (usec): min=4385, max=48398, avg=13303.89, stdev=8159.08 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 5473], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8029], 00:43:32.931 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10945], 00:43:32.931 | 70.00th=[13829], 80.00th=[17695], 90.00th=[26870], 95.00th=[28705], 00:43:32.931 | 99.00th=[41157], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:43:32.931 | 99.99th=[48497] 00:43:32.931 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:32.931 slat (nsec): min=1638, max=39182k, avg=88172.79, stdev=780026.58 00:43:32.931 clat (usec): min=3622, max=76074, avg=11875.77, stdev=8741.14 00:43:32.931 lat (usec): min=3630, max=76099, avg=11963.95, stdev=8810.32 00:43:32.931 clat percentiles (usec): 00:43:32.931 | 1.00th=[ 5014], 5.00th=[ 7111], 10.00th=[ 7701], 20.00th=[ 7963], 00:43:32.931 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9896], 00:43:32.931 | 70.00th=[11076], 80.00th=[13042], 90.00th=[16909], 95.00th=[30278], 00:43:32.931 | 99.00th=[51119], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:43:32.931 | 99.99th=[76022] 00:43:32.931 bw ( KiB/s): min=16384, max=24576, per=23.99%, avg=20480.00, stdev=5792.62, samples=2 00:43:32.931 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:43:32.931 
lat (msec) : 2=0.01%, 4=0.11%, 10=58.09%, 20=28.96%, 50=11.92% 00:43:32.931 lat (msec) : 100=0.91% 00:43:32.931 cpu : usr=3.69%, sys=6.79%, ctx=299, majf=0, minf=1 00:43:32.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:32.931 issued rwts: total=4994,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:32.931 00:43:32.931 Run status group 0 (all jobs): 00:43:32.931 READ: bw=78.8MiB/s (82.6MB/s), 17.6MiB/s-24.1MiB/s (18.4MB/s-25.3MB/s), io=79.3MiB (83.1MB), run=1002-1006msec 00:43:32.931 WRITE: bw=83.4MiB/s (87.4MB/s), 18.0MiB/s-25.9MiB/s (18.8MB/s-27.2MB/s), io=83.9MiB (87.9MB), run=1002-1006msec 00:43:32.931 00:43:32.931 Disk stats (read/write): 00:43:32.931 nvme0n1: ios=3739/4096, merge=0/0, ticks=38450/64074, in_queue=102524, util=87.17% 00:43:32.931 nvme0n2: ios=5267/5632, merge=0/0, ticks=17877/17918, in_queue=35795, util=87.97% 00:43:32.931 nvme0n3: ios=4146/4263, merge=0/0, ticks=25412/22319, in_queue=47731, util=93.56% 00:43:32.931 nvme0n4: ios=4354/4608, merge=0/0, ticks=21869/17984, in_queue=39853, util=93.38% 00:43:32.931 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:32.931 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2877311 00:43:32.931 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:32.931 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:32.931 [global] 00:43:32.931 thread=1 00:43:32.931 invalidate=1 00:43:32.931 rw=read 00:43:32.931 time_based=1 00:43:32.931 runtime=10 
00:43:32.931 ioengine=libaio 00:43:32.931 direct=1 00:43:32.931 bs=4096 00:43:32.931 iodepth=1 00:43:32.931 norandommap=1 00:43:32.931 numjobs=1 00:43:32.931 00:43:32.931 [job0] 00:43:32.931 filename=/dev/nvme0n1 00:43:32.931 [job1] 00:43:32.931 filename=/dev/nvme0n2 00:43:32.931 [job2] 00:43:32.931 filename=/dev/nvme0n3 00:43:32.931 [job3] 00:43:32.931 filename=/dev/nvme0n4 00:43:32.931 Could not set queue depth (nvme0n1) 00:43:32.931 Could not set queue depth (nvme0n2) 00:43:32.931 Could not set queue depth (nvme0n3) 00:43:32.931 Could not set queue depth (nvme0n4) 00:43:33.197 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:33.197 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:33.197 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:33.197 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:33.197 fio-3.35 00:43:33.197 Starting 4 threads 00:43:35.744 11:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:36.004 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2482176, buflen=4096 00:43:36.004 fio: pid=2877697, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:36.004 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:36.004 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=19714048, buflen=4096 00:43:36.004 fio: pid=2877687, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:36.004 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:36.004 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:36.264 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4329472, buflen=4096 00:43:36.264 fio: pid=2877649, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:36.264 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:36.264 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:36.523 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8654848, buflen=4096 00:43:36.523 fio: pid=2877660, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:36.523 00:43:36.523 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2877649: Sat Dec 7 11:55:35 2024 00:43:36.523 read: IOPS=362, BW=1448KiB/s (1483kB/s)(4228KiB/2920msec) 00:43:36.523 slat (usec): min=6, max=2748, avg=29.02, stdev=83.75 00:43:36.523 clat (usec): min=582, max=42067, avg=2706.80, stdev=7970.97 00:43:36.523 lat (usec): min=589, max=44172, avg=2735.83, stdev=7983.86 00:43:36.523 clat percentiles (usec): 00:43:36.523 | 1.00th=[ 898], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1029], 00:43:36.523 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1090], 00:43:36.523 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1221], 00:43:36.523 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:36.523 | 99.99th=[42206] 00:43:36.523 bw ( KiB/s): min= 104, max= 3616, per=15.25%, 
avg=1675.20, stdev=1778.51, samples=5 00:43:36.523 iops : min= 26, max= 904, avg=418.80, stdev=444.63, samples=5 00:43:36.523 lat (usec) : 750=0.19%, 1000=11.15% 00:43:36.523 lat (msec) : 2=84.50%, 20=0.09%, 50=3.97% 00:43:36.523 cpu : usr=0.79%, sys=1.27%, ctx=1059, majf=0, minf=1 00:43:36.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:36.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:36.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:36.523 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2877660: Sat Dec 7 11:55:35 2024 00:43:36.523 read: IOPS=676, BW=2703KiB/s (2768kB/s)(8452KiB/3127msec) 00:43:36.523 slat (usec): min=6, max=21799, avg=56.99, stdev=732.38 00:43:36.523 clat (usec): min=424, max=41919, avg=1406.87, stdev=4004.23 00:43:36.523 lat (usec): min=433, max=41930, avg=1463.88, stdev=4066.81 00:43:36.523 clat percentiles (usec): 00:43:36.523 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 914], 00:43:36.523 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:43:36.523 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:43:36.523 | 99.00th=[12649], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:43:36.523 | 99.99th=[41681] 00:43:36.523 bw ( KiB/s): min= 624, max= 4016, per=24.33%, avg=2673.33, stdev=1586.09, samples=6 00:43:36.523 iops : min= 156, max= 1004, avg=668.33, stdev=396.52, samples=6 00:43:36.523 lat (usec) : 500=0.14%, 750=3.93%, 1000=42.10% 00:43:36.523 lat (msec) : 2=52.70%, 20=0.09%, 50=0.99% 00:43:36.523 cpu : usr=0.90%, sys=2.34%, ctx=2118, majf=0, minf=2 00:43:36.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:36.523 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:36.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:36.523 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2877687: Sat Dec 7 11:55:35 2024 00:43:36.523 read: IOPS=1767, BW=7070KiB/s (7240kB/s)(18.8MiB/2723msec) 00:43:36.523 slat (nsec): min=6686, max=81631, avg=22477.01, stdev=8373.44 00:43:36.523 clat (usec): min=174, max=2047, avg=533.61, stdev=70.77 00:43:36.523 lat (usec): min=182, max=2074, avg=556.09, stdev=71.85 00:43:36.523 clat percentiles (usec): 00:43:36.523 | 1.00th=[ 334], 5.00th=[ 424], 10.00th=[ 445], 20.00th=[ 478], 00:43:36.523 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:43:36.523 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 603], 95.00th=[ 627], 00:43:36.523 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 734], 99.95th=[ 750], 00:43:36.523 | 99.99th=[ 2040] 00:43:36.523 bw ( KiB/s): min= 6792, max= 7328, per=64.84%, avg=7124.80, stdev=221.38, samples=5 00:43:36.523 iops : min= 1698, max= 1832, avg=1781.20, stdev=55.35, samples=5 00:43:36.523 lat (usec) : 250=0.31%, 500=26.13%, 750=73.49%, 1000=0.02% 00:43:36.523 lat (msec) : 4=0.02% 00:43:36.523 cpu : usr=1.62%, sys=4.70%, ctx=4815, majf=0, minf=2 00:43:36.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:36.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.523 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:36.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:36.523 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2877697: Sat Dec 7 11:55:35 2024 
00:43:36.523 read: IOPS=238, BW=953KiB/s (976kB/s)(2424KiB/2543msec) 00:43:36.523 slat (nsec): min=7326, max=60036, avg=26760.41, stdev=3624.95 00:43:36.523 clat (usec): min=631, max=42097, avg=4127.59, stdev=10844.17 00:43:36.523 lat (usec): min=658, max=42123, avg=4154.35, stdev=10843.80 00:43:36.523 clat percentiles (usec): 00:43:36.523 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 914], 00:43:36.523 | 30.00th=[ 963], 40.00th=[ 1004], 50.00th=[ 1045], 60.00th=[ 1090], 00:43:36.523 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1221], 95.00th=[41681], 00:43:36.523 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:36.523 | 99.99th=[42206] 00:43:36.523 bw ( KiB/s): min= 96, max= 3824, per=8.56%, avg=940.80, stdev=1626.00, samples=5 00:43:36.523 iops : min= 24, max= 956, avg=235.20, stdev=406.50, samples=5 00:43:36.523 lat (usec) : 750=0.99%, 1000=37.73% 00:43:36.523 lat (msec) : 2=53.54%, 50=7.58% 00:43:36.523 cpu : usr=0.35%, sys=0.98%, ctx=607, majf=0, minf=2 00:43:36.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:36.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.524 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.524 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:36.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:36.524 00:43:36.524 Run status group 0 (all jobs): 00:43:36.524 READ: bw=10.7MiB/s (11.2MB/s), 953KiB/s-7070KiB/s (976kB/s-7240kB/s), io=33.6MiB (35.2MB), run=2543-3127msec 00:43:36.524 00:43:36.524 Disk stats (read/write): 00:43:36.524 nvme0n1: ios=1054/0, merge=0/0, ticks=2626/0, in_queue=2626, util=93.09% 00:43:36.524 nvme0n2: ios=2033/0, merge=0/0, ticks=2817/0, in_queue=2817, util=92.63% 00:43:36.524 nvme0n3: ios=4517/0, merge=0/0, ticks=2359/0, in_queue=2359, util=95.60% 00:43:36.524 nvme0n4: ios=607/0, merge=0/0, ticks=2464/0, in_queue=2464, util=96.05% 00:43:36.524 
11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:36.524 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:36.784 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:36.784 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:37.044 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:37.044 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:37.044 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:37.044 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:37.323 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:37.323 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:37.582 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:37.582 11:55:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2877311 00:43:37.582 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:37.582 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:38.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:38.152 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:38.152 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:38.412 nvmf hotplug test: fio failed as expected 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:38.412 11:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:38.412 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:38.413 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:38.413 rmmod nvme_tcp 00:43:38.672 rmmod nvme_fabrics 00:43:38.672 rmmod nvme_keyring 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2874125 ']' 00:43:38.673 11:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2874125 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2874125 ']' 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2874125 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874125 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874125' 00:43:38.673 killing process with pid 2874125 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2874125 00:43:38.673 11:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2874125 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:39.614 11:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:41.519 00:43:41.519 real 0m29.261s 00:43:41.519 user 2m20.148s 00:43:41.519 sys 0m12.321s 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.519 ************************************ 00:43:41.519 END TEST nvmf_fio_target 00:43:41.519 ************************************ 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:41.519 11:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:41.519 ************************************ 00:43:41.519 START TEST nvmf_bdevio 00:43:41.519 ************************************ 00:43:41.519 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:41.787 * Looking for test storage... 00:43:41.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:41.787 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:41.787 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:41.787 11:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:41.787 11:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:41.787 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.788 --rc genhtml_branch_coverage=1 
00:43:41.788 --rc genhtml_function_coverage=1 00:43:41.788 --rc genhtml_legend=1 00:43:41.788 --rc geninfo_all_blocks=1 00:43:41.788 --rc geninfo_unexecuted_blocks=1 00:43:41.788 00:43:41.788 ' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.788 --rc genhtml_branch_coverage=1 00:43:41.788 --rc genhtml_function_coverage=1 00:43:41.788 --rc genhtml_legend=1 00:43:41.788 --rc geninfo_all_blocks=1 00:43:41.788 --rc geninfo_unexecuted_blocks=1 00:43:41.788 00:43:41.788 ' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.788 --rc genhtml_branch_coverage=1 00:43:41.788 --rc genhtml_function_coverage=1 00:43:41.788 --rc genhtml_legend=1 00:43:41.788 --rc geninfo_all_blocks=1 00:43:41.788 --rc geninfo_unexecuted_blocks=1 00:43:41.788 00:43:41.788 ' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:41.788 --rc genhtml_branch_coverage=1 00:43:41.788 --rc genhtml_function_coverage=1 00:43:41.788 --rc genhtml_legend=1 00:43:41.788 --rc geninfo_all_blocks=1 00:43:41.788 --rc geninfo_unexecuted_blocks=1 00:43:41.788 00:43:41.788 ' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:41.788 11:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:41.788 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:41.789 11:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:49.998 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:43:49.998 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:49.998 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:49.999 11:55:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:49.999 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:49.999 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:49.999 11:55:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:49.999 Found net devices under 0000:31:00.0: cvl_0_0 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:49.999 Found net devices under 0000:31:00.1: cvl_0_1 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:49.999 11:55:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:49.999 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:49.999 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:49.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:50.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:43:50.000 00:43:50.000 --- 10.0.0.2 ping statistics --- 00:43:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:50.000 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:50.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:50.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:43:50.000 00:43:50.000 --- 10.0.0.1 ping statistics --- 00:43:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:50.000 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2882903 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2882903 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2882903 ']' 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:50.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:50.000 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 [2024-12-07 11:55:48.373569] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:50.000 [2024-12-07 11:55:48.375928] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:43:50.000 [2024-12-07 11:55:48.376006] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:50.000 [2024-12-07 11:55:48.526133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:50.000 [2024-12-07 11:55:48.626741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:50.000 [2024-12-07 11:55:48.626781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:50.000 [2024-12-07 11:55:48.626794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:50.000 [2024-12-07 11:55:48.626804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:50.000 [2024-12-07 11:55:48.626815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:50.000 [2024-12-07 11:55:48.629279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:50.000 [2024-12-07 11:55:48.629400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:50.000 [2024-12-07 11:55:48.629490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:50.000 [2024-12-07 11:55:48.629516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:50.000 [2024-12-07 11:55:48.887196] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:50.000 [2024-12-07 11:55:48.888625] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:50.000 [2024-12-07 11:55:48.889665] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:43:50.000 [2024-12-07 11:55:48.889911] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:50.000 [2024-12-07 11:55:48.890055] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 [2024-12-07 11:55:49.178691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 Malloc0 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:50.000 [2024-12-07 11:55:49.314528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:50.000 { 00:43:50.000 "params": { 00:43:50.000 "name": "Nvme$subsystem", 00:43:50.000 "trtype": "$TEST_TRANSPORT", 00:43:50.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:50.000 "adrfam": "ipv4", 00:43:50.000 "trsvcid": "$NVMF_PORT", 00:43:50.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:50.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:50.000 "hdgst": ${hdgst:-false}, 00:43:50.000 "ddgst": ${ddgst:-false} 00:43:50.000 }, 00:43:50.000 "method": "bdev_nvme_attach_controller" 00:43:50.000 } 00:43:50.000 EOF 00:43:50.000 )") 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:50.000 11:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:50.000 "params": { 00:43:50.000 "name": "Nvme1", 00:43:50.000 "trtype": "tcp", 00:43:50.000 "traddr": "10.0.0.2", 00:43:50.000 "adrfam": "ipv4", 00:43:50.000 "trsvcid": "4420", 00:43:50.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:50.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:50.000 "hdgst": false, 00:43:50.000 "ddgst": false 00:43:50.001 }, 00:43:50.001 "method": "bdev_nvme_attach_controller" 00:43:50.001 }' 00:43:50.261 [2024-12-07 11:55:49.408829] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:43:50.261 [2024-12-07 11:55:49.408951] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883253 ] 00:43:50.261 [2024-12-07 11:55:49.552299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:50.520 [2024-12-07 11:55:49.653889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:50.520 [2024-12-07 11:55:49.653970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.520 [2024-12-07 11:55:49.653971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:50.793 I/O targets: 00:43:50.793 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:50.793 00:43:50.793 00:43:50.793 CUnit - A unit testing framework for C - Version 2.1-3 00:43:50.793 http://cunit.sourceforge.net/ 00:43:50.793 00:43:50.793 00:43:50.793 Suite: bdevio tests on: Nvme1n1 00:43:51.055 Test: blockdev write read block ...passed 00:43:51.055 Test: blockdev write zeroes read block ...passed 00:43:51.055 Test: blockdev write zeroes read no split ...passed 00:43:51.055 Test: blockdev 
write zeroes read split ...passed 00:43:51.055 Test: blockdev write zeroes read split partial ...passed 00:43:51.055 Test: blockdev reset ...[2024-12-07 11:55:50.361885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:51.055 [2024-12-07 11:55:50.361998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:43:51.055 [2024-12-07 11:55:50.374834] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:43:51.055 passed 00:43:51.055 Test: blockdev write read 8 blocks ...passed 00:43:51.055 Test: blockdev write read size > 128k ...passed 00:43:51.055 Test: blockdev write read invalid size ...passed 00:43:51.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:51.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:51.315 Test: blockdev write read max offset ...passed 00:43:51.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:51.315 Test: blockdev writev readv 8 blocks ...passed 00:43:51.315 Test: blockdev writev readv 30 x 1block ...passed 00:43:51.315 Test: blockdev writev readv block ...passed 00:43:51.315 Test: blockdev writev readv size > 128k ...passed 00:43:51.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:51.315 Test: blockdev comparev and writev ...[2024-12-07 11:55:50.639518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.639550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.639566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:43:51.315 [2024-12-07 11:55:50.639576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.640054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.640069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.640082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.640090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.640587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.640600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.640616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.640624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.641083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.641096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:51.315 [2024-12-07 11:55:50.641113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:51.315 [2024-12-07 11:55:50.641121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:51.576 passed 00:43:51.576 Test: blockdev nvme passthru rw ...passed 00:43:51.576 Test: blockdev nvme passthru vendor specific ...[2024-12-07 11:55:50.725666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:51.576 [2024-12-07 11:55:50.725692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:51.576 [2024-12-07 11:55:50.725958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:51.576 [2024-12-07 11:55:50.725968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:51.576 [2024-12-07 11:55:50.726197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:51.576 [2024-12-07 11:55:50.726211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:51.576 [2024-12-07 11:55:50.726436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:51.576 [2024-12-07 11:55:50.726447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:51.576 passed 00:43:51.576 Test: blockdev nvme admin passthru ...passed 00:43:51.576 Test: blockdev copy ...passed 00:43:51.576 00:43:51.576 Run Summary: Type Total Ran Passed Failed Inactive 00:43:51.576 suites 1 1 n/a 0 0 00:43:51.576 tests 23 23 23 0 0 00:43:51.576 asserts 152 152 152 0 n/a 00:43:51.576 00:43:51.576 Elapsed time = 
1.355 seconds 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:52.147 rmmod nvme_tcp 00:43:52.147 rmmod nvme_fabrics 00:43:52.147 rmmod nvme_keyring 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:52.147 11:55:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2882903 ']' 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2882903 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2882903 ']' 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2882903 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:52.147 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882903 00:43:52.407 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:52.407 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:52.407 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882903' 00:43:52.407 killing process with pid 2882903 00:43:52.407 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2882903 00:43:52.407 11:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2882903 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:53.349 11:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:53.349 11:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:55.899 11:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:55.899 00:43:55.899 real 0m13.767s 00:43:55.899 user 0m16.362s 00:43:55.899 sys 0m6.759s 00:43:55.899 11:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:55.899 11:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:55.899 ************************************ 00:43:55.899 END TEST nvmf_bdevio 00:43:55.899 ************************************ 00:43:55.899 11:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:55.899 00:43:55.899 real 5m11.303s 00:43:55.899 user 10m50.387s 00:43:55.899 sys 2m4.954s 00:43:55.899 11:55:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:55.899 11:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:55.899 ************************************ 00:43:55.899 END TEST nvmf_target_core_interrupt_mode 00:43:55.899 ************************************ 00:43:55.899 11:55:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:55.899 11:55:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:55.899 11:55:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:55.899 11:55:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:55.899 ************************************ 00:43:55.899 START TEST nvmf_interrupt 00:43:55.899 ************************************ 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:55.899 * Looking for test storage... 
00:43:55.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:55.899 --rc genhtml_branch_coverage=1 00:43:55.899 --rc genhtml_function_coverage=1 00:43:55.899 --rc genhtml_legend=1 00:43:55.899 --rc geninfo_all_blocks=1 00:43:55.899 --rc geninfo_unexecuted_blocks=1 00:43:55.899 00:43:55.899 ' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:55.899 --rc genhtml_branch_coverage=1 00:43:55.899 --rc 
genhtml_function_coverage=1 00:43:55.899 --rc genhtml_legend=1 00:43:55.899 --rc geninfo_all_blocks=1 00:43:55.899 --rc geninfo_unexecuted_blocks=1 00:43:55.899 00:43:55.899 ' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:55.899 --rc genhtml_branch_coverage=1 00:43:55.899 --rc genhtml_function_coverage=1 00:43:55.899 --rc genhtml_legend=1 00:43:55.899 --rc geninfo_all_blocks=1 00:43:55.899 --rc geninfo_unexecuted_blocks=1 00:43:55.899 00:43:55.899 ' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:55.899 --rc genhtml_branch_coverage=1 00:43:55.899 --rc genhtml_function_coverage=1 00:43:55.899 --rc genhtml_legend=1 00:43:55.899 --rc geninfo_all_blocks=1 00:43:55.899 --rc geninfo_unexecuted_blocks=1 00:43:55.899 00:43:55.899 ' 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:55.899 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:55.899 
11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:55.900 
11:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:55.900 11:55:54 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:55.900 
11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:55.900 11:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:02.479 11:56:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:02.479 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:02.480 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:02.480 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:02.480 11:56:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:02.480 Found net devices under 0000:31:00.0: cvl_0_0 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:02.480 Found net devices under 0000:31:00.1: cvl_0_1 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:02.480 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:02.741 11:56:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:02.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:02.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:44:02.741 00:44:02.741 --- 10.0.0.2 ping statistics --- 00:44:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.741 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:02.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:02.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:44:02.741 00:44:02.741 --- 10.0.0.1 ping statistics --- 00:44:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.741 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:02.741 11:56:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:02.741 11:56:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2887930 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2887930 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2887930 ']' 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:02.741 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.003 [2024-12-07 11:56:02.124077] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:03.003 [2024-12-07 11:56:02.126746] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:44:03.003 [2024-12-07 11:56:02.126845] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:03.003 [2024-12-07 11:56:02.270530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:03.265 [2024-12-07 11:56:02.367455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:03.265 [2024-12-07 11:56:02.367496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:03.265 [2024-12-07 11:56:02.367511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:03.265 [2024-12-07 11:56:02.367521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:03.265 [2024-12-07 11:56:02.367533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:03.265 [2024-12-07 11:56:02.369350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.265 [2024-12-07 11:56:02.369390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:03.265 [2024-12-07 11:56:02.612539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:03.265 [2024-12-07 11:56:02.612623] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:03.265 [2024-12-07 11:56:02.612818] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:44:03.526 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:03.526 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:44:03.526 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:03.526 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:03.526 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:44:03.787 5000+0 records in 00:44:03.787 5000+0 records out 00:44:03.787 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0156941 s, 652 MB/s 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 AIO0 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.787 11:56:02 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 [2024-12-07 11:56:02.970111] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.787 11:56:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:03.787 [2024-12-07 11:56:03.014517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2887930 0 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 0 idle 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:03.787 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2887930 root 20 0 20.1t 207360 99072 S 0.0 0.2 0:00.60 reactor_0' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2887930 root 20 0 20.1t 207360 99072 S 0.0 0.2 0:00.60 reactor_0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:04.048 
11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2887930 1 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 1 idle 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2888002 root 20 0 20.1t 207360 99072 S 0.0 0.2 0:00.00 reactor_1' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2888002 root 20 0 20.1t 
207360 99072 S 0.0 0.2 0:00.00 reactor_1 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2888098 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2887930 0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2887930 0 busy 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:04.048 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2887930 root 20 0 20.1t 208512 99072 S 6.7 0.2 0:00.61 reactor_0' 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2887930 root 20 0 20.1t 208512 99072 S 6.7 0.2 0:00.61 reactor_0 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:04.307 11:56:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:44:05.247 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:44:05.247 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.247 11:56:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:05.247 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2887930 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:02.84 reactor_0' 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2887930 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:02.84 reactor_0 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:44:05.506 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2887930 1 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2887930 1 busy 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:05.507 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2888002 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:01.30 reactor_1' 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2888002 root 20 0 20.1t 221184 99072 R 99.9 0.2 0:01.30 reactor_1 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.767 11:56:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2888098 00:44:15.763 Initializing NVMe Controllers 00:44:15.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:15.763 
Controller IO queue size 256, less than required. 00:44:15.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:15.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:15.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:15.763 Initialization complete. Launching workers. 00:44:15.763 ======================================================== 00:44:15.764 Latency(us) 00:44:15.764 Device Information : IOPS MiB/s Average min max 00:44:15.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18535.36 72.40 13816.86 4276.21 33834.94 00:44:15.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15607.78 60.97 16408.64 6770.44 19970.06 00:44:15.764 ======================================================== 00:44:15.764 Total : 34143.14 133.37 15001.63 4276.21 33834.94 00:44:15.764 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2887930 0 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 0 idle 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:15.764 11:56:13 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2887930 root 20 0 20.1t 221184 99072 S 6.7 0.2 0:20.60 reactor_0' 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2887930 root 20 0 20.1t 221184 99072 S 6.7 0.2 0:20.60 reactor_0 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2887930 1 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 1 idle 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:15.764 11:56:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2888002 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:10.01 reactor_1' 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2888002 root 20 0 20.1t 221184 99072 S 0.0 0.2 0:10.01 reactor_1 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:44:15.764 11:56:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2887930 0 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 0 idle 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:17.674 11:56:16 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:17.674 11:56:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2887930 root 20 0 20.1t 294912 125568 S 0.0 0.2 0:21.13 reactor_0' 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2887930 root 20 0 20.1t 294912 125568 S 0.0 0.2 0:21.13 reactor_0 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # 
return 0 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2887930 1 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2887930 1 idle 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2887930 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2887930 -w 256 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:17.935 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2888002 root 20 0 20.1t 294912 125568 S 0.0 0.2 0:10.34 reactor_1' 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2888002 root 20 0 20.1t 294912 125568 S 0.0 0.2 0:10.34 reactor_1 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:17.936 11:56:17 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:17.936 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:18.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:18.507 11:56:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:18.507 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:18.508 rmmod nvme_tcp 00:44:18.508 rmmod nvme_fabrics 00:44:18.508 rmmod nvme_keyring 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2887930 ']' 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2887930 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2887930 ']' 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2887930 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2887930 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2887930' 00:44:18.508 killing process with pid 2887930 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2887930 00:44:18.508 11:56:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2887930 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:19.449 11:56:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:19.449 11:56:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:21.991 11:56:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:21.991 00:44:21.991 real 0m26.005s 00:44:21.991 user 0m42.218s 00:44:21.991 sys 0m9.232s 00:44:21.991 11:56:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:21.991 11:56:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:21.991 ************************************ 00:44:21.991 END TEST nvmf_interrupt 00:44:21.991 ************************************ 00:44:21.991 00:44:21.991 real 38m34.369s 00:44:21.991 user 92m37.468s 00:44:21.991 sys 11m5.454s 00:44:21.991 11:56:20 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:21.991 11:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.991 ************************************ 00:44:21.991 END TEST nvmf_tcp 00:44:21.991 ************************************ 00:44:21.991 11:56:20 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:44:21.991 11:56:20 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:21.991 11:56:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:21.991 11:56:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:21.991 11:56:20 -- common/autotest_common.sh@10 -- # set +x 00:44:21.991 ************************************ 00:44:21.991 START TEST spdkcli_nvmf_tcp 00:44:21.991 ************************************ 00:44:21.991 11:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:21.991 * Looking for test storage... 00:44:21.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:21.991 11:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:21.991 11:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:44:21.991 11:56:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:21.991 
11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.991 --rc genhtml_branch_coverage=1 00:44:21.991 --rc genhtml_function_coverage=1 00:44:21.991 
--rc genhtml_legend=1 00:44:21.991 --rc geninfo_all_blocks=1 00:44:21.991 --rc geninfo_unexecuted_blocks=1 00:44:21.991 00:44:21.991 ' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.991 --rc genhtml_branch_coverage=1 00:44:21.991 --rc genhtml_function_coverage=1 00:44:21.991 --rc genhtml_legend=1 00:44:21.991 --rc geninfo_all_blocks=1 00:44:21.991 --rc geninfo_unexecuted_blocks=1 00:44:21.991 00:44:21.991 ' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.991 --rc genhtml_branch_coverage=1 00:44:21.991 --rc genhtml_function_coverage=1 00:44:21.991 --rc genhtml_legend=1 00:44:21.991 --rc geninfo_all_blocks=1 00:44:21.991 --rc geninfo_unexecuted_blocks=1 00:44:21.991 00:44:21.991 ' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.991 --rc genhtml_branch_coverage=1 00:44:21.991 --rc genhtml_function_coverage=1 00:44:21.991 --rc genhtml_legend=1 00:44:21.991 --rc geninfo_all_blocks=1 00:44:21.991 --rc geninfo_unexecuted_blocks=1 00:44:21.991 00:44:21.991 ' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:21.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:44:21.991 11:56:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2891562 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2891562 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2891562 ']' 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:21.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:21.992 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.992 [2024-12-07 11:56:21.184158] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:44:21.992 [2024-12-07 11:56:21.184269] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891562 ] 00:44:21.992 [2024-12-07 11:56:21.312778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:22.252 [2024-12-07 11:56:21.411231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:22.252 [2024-12-07 11:56:21.411253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:22.823 11:56:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:22.823 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:22.823 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:22.823 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:22.823 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:44:22.823 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:22.823 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:22.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:22.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:22.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:22.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:22.823 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:22.823 ' 00:44:25.369 [2024-12-07 11:56:24.526520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:26.750 [2024-12-07 11:56:25.734792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:28.659 [2024-12-07 11:56:27.953283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:30.581 [2024-12-07 11:56:29.858923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:32.495 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:32.495 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:32.495 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:32.495 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:32.495 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:32.495 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:32.495 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:32.495 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:32.495 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:32.495 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:32.495 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:32.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:32.495 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:32.495 11:56:31 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:32.755 11:56:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:32.756 11:56:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:32.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:32.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:32.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:32.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:32.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:32.756 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:32.756 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:32.756 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:44:32.756 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:32.756 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:32.756 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:32.756 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:32.756 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:32.756 ' 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:38.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:38.142 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:38.142 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:38.142 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:38.142 11:56:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:44:38.142 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:38.142 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2891562 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2891562 ']' 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2891562 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2891562 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2891562' 00:44:38.143 killing process with pid 2891562 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2891562 00:44:38.143 11:56:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2891562 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2891562 ']' 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2891562 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2891562 ']' 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2891562 00:44:39.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2891562) - No such process 00:44:39.082 11:56:38 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2891562 is not found' 00:44:39.082 Process with pid 2891562 is not found 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:39.082 00:44:39.082 real 0m17.353s 00:44:39.082 user 0m35.388s 00:44:39.082 sys 0m0.858s 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:39.082 11:56:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:39.082 ************************************ 00:44:39.082 END TEST spdkcli_nvmf_tcp 00:44:39.082 ************************************ 00:44:39.082 11:56:38 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:39.082 11:56:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:39.082 11:56:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:39.082 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:44:39.082 ************************************ 00:44:39.082 START TEST nvmf_identify_passthru 00:44:39.082 ************************************ 00:44:39.082 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:39.082 * Looking for test storage... 
00:44:39.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:39.082 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:39.082 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:39.082 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:39.344 11:56:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:39.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.344 --rc genhtml_branch_coverage=1 00:44:39.344 --rc genhtml_function_coverage=1 00:44:39.344 --rc genhtml_legend=1 00:44:39.344 --rc geninfo_all_blocks=1 00:44:39.344 --rc geninfo_unexecuted_blocks=1 00:44:39.344 00:44:39.344 ' 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:39.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.344 --rc genhtml_branch_coverage=1 00:44:39.344 --rc genhtml_function_coverage=1 
00:44:39.344 --rc genhtml_legend=1 00:44:39.344 --rc geninfo_all_blocks=1 00:44:39.344 --rc geninfo_unexecuted_blocks=1 00:44:39.344 00:44:39.344 ' 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:39.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.344 --rc genhtml_branch_coverage=1 00:44:39.344 --rc genhtml_function_coverage=1 00:44:39.344 --rc genhtml_legend=1 00:44:39.344 --rc geninfo_all_blocks=1 00:44:39.344 --rc geninfo_unexecuted_blocks=1 00:44:39.344 00:44:39.344 ' 00:44:39.344 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:39.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.344 --rc genhtml_branch_coverage=1 00:44:39.344 --rc genhtml_function_coverage=1 00:44:39.344 --rc genhtml_legend=1 00:44:39.344 --rc geninfo_all_blocks=1 00:44:39.344 --rc geninfo_unexecuted_blocks=1 00:44:39.344 00:44:39.344 ' 00:44:39.344 11:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:39.344 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:39.344 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:39.344 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:39.344 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:39.344 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:39.345 11:56:38 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:39.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:39.345 11:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:39.345 11:56:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:39.345 11:56:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.345 11:56:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.345 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:39.345 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:39.345 11:56:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:39.345 11:56:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:45.929 11:56:44 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:45.929 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:45.930 
11:56:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:45.930 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:45.930 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:45.930 Found net devices under 0000:31:00.0: cvl_0_0 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:45.930 Found net devices under 0000:31:00.1: cvl_0_1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:45.930 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:45.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:45.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:44:45.930 00:44:45.930 --- 10.0.0.2 ping statistics --- 00:44:45.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.930 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:45.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:45.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:44:45.930 00:44:45.930 --- 10.0.0.1 ping statistics --- 00:44:45.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:45.930 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:45.930 11:56:45 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:45.930 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:45.930 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:45.930 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:45.930 11:56:45 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:44:45.931 11:56:45 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:44:45.931 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:45.931 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:45.931 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:45.931 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:45.931 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:46.870 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:44:46.870 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:46.870 11:56:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:46.870 11:56:45 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2898719 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2898719 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2898719 ']' 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:47.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.440 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:47.440 11:56:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:47.440 [2024-12-07 11:56:46.614342] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:44:47.440 [2024-12-07 11:56:46.614451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:47.440 [2024-12-07 11:56:46.748069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:47.701 [2024-12-07 11:56:46.847807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:47.701 [2024-12-07 11:56:46.847852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:47.701 [2024-12-07 11:56:46.847864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:47.701 [2024-12-07 11:56:46.847875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:47.701 [2024-12-07 11:56:46.847884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
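The serial- and model-number checks earlier in this test reduce to a grep/awk pipeline over `spdk_nvme_identify` output: match the labelled line, keep the third whitespace-separated column. A minimal sketch against canned output — the sample text below is hypothetical apart from the serial number seen in this run, and note the pattern keeps only the first word of a multi-word model string (hence `nvme_model_number=SAMSUNG`):

```shell
# Hypothetical sample of spdk_nvme_identify output; the real tool prints
# many more fields. Only the serial number matches this run's log.
identify_output='Serial Number:          S64GNE0R605494
Model Number:           SAMSUNG MZQL21T9HCJR-00A07
Firmware Version:       GDC5302Q'

# Same grep|awk idiom the harness uses: select the labelled line, then
# print field 3. awk splits on runs of whitespace, so the label consumes
# fields 1 and 2 and the value starts at field 3.
nvme_serial_number=$(printf '%s\n' "$identify_output" \
  | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$(printf '%s\n' "$identify_output" \
  | grep 'Model Number:' | awk '{print $3}')

echo "$nvme_serial_number $nvme_model_number"
```

This is why the later comparison `'[' SAMSUNG '!=' SAMSUNG ']'` works on a single token rather than the full model string.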
00:44:47.701 [2024-12-07 11:56:46.850173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:47.701 [2024-12-07 11:56:46.850417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:47.701 [2024-12-07 11:56:46.850560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.701 [2024-12-07 11:56:46.850579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:48.272 11:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.272 INFO: Log level set to 20 00:44:48.272 INFO: Requests: 00:44:48.272 { 00:44:48.272 "jsonrpc": "2.0", 00:44:48.272 "method": "nvmf_set_config", 00:44:48.272 "id": 1, 00:44:48.272 "params": { 00:44:48.272 "admin_cmd_passthru": { 00:44:48.272 "identify_ctrlr": true 00:44:48.272 } 00:44:48.272 } 00:44:48.272 } 00:44:48.272 00:44:48.272 INFO: response: 00:44:48.272 { 00:44:48.272 "jsonrpc": "2.0", 00:44:48.272 "id": 1, 00:44:48.272 "result": true 00:44:48.272 } 00:44:48.272 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.272 11:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.272 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.272 INFO: Setting log level to 20 00:44:48.272 INFO: Setting log level to 20 00:44:48.272 INFO: Log level set to 20 00:44:48.272 INFO: Log level set to 20 00:44:48.272 
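The `rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr` call that follows is logged together with the JSON-RPC 2.0 request it produces. In the real run `scripts/rpc.py` builds this body and sends it over the UNIX socket `/var/tmp/spdk.sock`; the sketch below only reassembles the same request by hand (body copied from the log) and sanity-checks its shape without needing a running target:

```shell
# JSON-RPC request equivalent to:
#   rpc_cmd nvmf_set_config --passthru-identify-ctrlr
# (copied from the INFO: Requests: block in the log; normally rpc.py
# constructs and transmits this over /var/tmp/spdk.sock).
request=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "method": "nvmf_set_config",
  "id": 1,
  "params": {
    "admin_cmd_passthru": {
      "identify_ctrlr": true
    }
  }
}
EOF
)

# Validate the shape with python's stdlib json module (avoids a jq
# dependency): pull out the method name and the passthru flag.
method=$(printf '%s' "$request" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])')
identify_ctrlr=$(printf '%s' "$request" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["params"]["admin_cmd_passthru"]["identify_ctrlr"])')
echo "$method $identify_ctrlr"
```

The `"result": true` response in the log is the target acknowledging that admin-command identify passthrough is now enabled, which is what `nvmf_tgt` later reports as "Custom identify ctrlr handler enabled".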
INFO: Requests: 00:44:48.272 { 00:44:48.272 "jsonrpc": "2.0", 00:44:48.272 "method": "framework_start_init", 00:44:48.272 "id": 1 00:44:48.272 } 00:44:48.272 00:44:48.272 INFO: Requests: 00:44:48.272 { 00:44:48.272 "jsonrpc": "2.0", 00:44:48.272 "method": "framework_start_init", 00:44:48.272 "id": 1 00:44:48.272 } 00:44:48.272 00:44:48.532 [2024-12-07 11:56:47.637580] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:48.532 INFO: response: 00:44:48.532 { 00:44:48.532 "jsonrpc": "2.0", 00:44:48.532 "id": 1, 00:44:48.532 "result": true 00:44:48.532 } 00:44:48.532 00:44:48.532 INFO: response: 00:44:48.532 { 00:44:48.532 "jsonrpc": "2.0", 00:44:48.532 "id": 1, 00:44:48.532 "result": true 00:44:48.532 } 00:44:48.532 00:44:48.532 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.532 11:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:48.532 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.532 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.532 INFO: Setting log level to 40 00:44:48.532 INFO: Setting log level to 40 00:44:48.532 INFO: Setting log level to 40 00:44:48.532 [2024-12-07 11:56:47.653132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:48.532 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.532 11:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:48.533 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:48.533 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.533 11:56:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:48.533 11:56:47 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.533 11:56:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.793 Nvme0n1 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.793 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.793 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.793 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.793 [2024-12-07 11:56:48.086430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:48.793 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.794 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:48.794 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.794 11:56:48 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:48.794 [ 00:44:48.794 { 00:44:48.794 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:48.794 "subtype": "Discovery", 00:44:48.794 "listen_addresses": [], 00:44:48.794 "allow_any_host": true, 00:44:48.794 "hosts": [] 00:44:48.794 }, 00:44:48.794 { 00:44:48.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:48.794 "subtype": "NVMe", 00:44:48.794 "listen_addresses": [ 00:44:48.794 { 00:44:48.794 "trtype": "TCP", 00:44:48.794 "adrfam": "IPv4", 00:44:48.794 "traddr": "10.0.0.2", 00:44:48.794 "trsvcid": "4420" 00:44:48.794 } 00:44:48.794 ], 00:44:48.794 "allow_any_host": true, 00:44:48.794 "hosts": [], 00:44:48.794 "serial_number": "SPDK00000000000001", 00:44:48.794 "model_number": "SPDK bdev Controller", 00:44:48.794 "max_namespaces": 1, 00:44:48.794 "min_cntlid": 1, 00:44:48.794 "max_cntlid": 65519, 00:44:48.794 "namespaces": [ 00:44:48.794 { 00:44:48.794 "nsid": 1, 00:44:48.794 "bdev_name": "Nvme0n1", 00:44:48.794 "name": "Nvme0n1", 00:44:48.794 "nguid": "3634473052605494002538450000002D", 00:44:48.794 "uuid": "36344730-5260-5494-0025-38450000002d" 00:44:48.794 } 00:44:48.794 ] 00:44:48.794 } 00:44:48.794 ] 00:44:48.794 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.794 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:48.794 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:48.794 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:49.054 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:44:49.054 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:49.054 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:49.054 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:49.635 11:56:48 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:49.635 rmmod nvme_tcp 00:44:49.635 rmmod nvme_fabrics 00:44:49.635 rmmod nvme_keyring 00:44:49.635 11:56:48 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2898719 ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2898719 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2898719 ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2898719 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898719 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898719' 00:44:49.635 killing process with pid 2898719 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2898719 00:44:49.635 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2898719 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:50.577 11:56:49 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:50.577 11:56:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:50.577 11:56:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:50.577 11:56:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:53.122 11:56:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:53.122 00:44:53.122 real 0m13.575s 00:44:53.122 user 0m12.986s 00:44:53.122 sys 0m6.348s 00:44:53.122 11:56:51 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:53.122 11:56:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:53.122 ************************************ 00:44:53.122 END TEST nvmf_identify_passthru 00:44:53.122 ************************************ 00:44:53.122 11:56:51 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:53.122 11:56:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:53.122 11:56:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:53.123 11:56:51 -- common/autotest_common.sh@10 -- # set +x 00:44:53.123 ************************************ 00:44:53.123 START TEST nvmf_dif 00:44:53.123 ************************************ 00:44:53.123 11:56:51 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:53.123 * Looking for test storage... 
00:44:53.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:53.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:53.123 --rc genhtml_branch_coverage=1 00:44:53.123 --rc genhtml_function_coverage=1 00:44:53.123 --rc genhtml_legend=1 00:44:53.123 --rc geninfo_all_blocks=1 00:44:53.123 --rc geninfo_unexecuted_blocks=1 00:44:53.123 00:44:53.123 ' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:53.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:53.123 --rc genhtml_branch_coverage=1 00:44:53.123 --rc genhtml_function_coverage=1 00:44:53.123 --rc genhtml_legend=1 00:44:53.123 --rc geninfo_all_blocks=1 00:44:53.123 --rc geninfo_unexecuted_blocks=1 00:44:53.123 00:44:53.123 ' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:44:53.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:53.123 --rc genhtml_branch_coverage=1 00:44:53.123 --rc genhtml_function_coverage=1 00:44:53.123 --rc genhtml_legend=1 00:44:53.123 --rc geninfo_all_blocks=1 00:44:53.123 --rc geninfo_unexecuted_blocks=1 00:44:53.123 00:44:53.123 ' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:53.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:53.123 --rc genhtml_branch_coverage=1 00:44:53.123 --rc genhtml_function_coverage=1 00:44:53.123 --rc genhtml_legend=1 00:44:53.123 --rc geninfo_all_blocks=1 00:44:53.123 --rc geninfo_unexecuted_blocks=1 00:44:53.123 00:44:53.123 ' 00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:53.123 11:56:52 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:53.123 11:56:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:53.123 11:56:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:53.123 11:56:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:53.123 11:56:52 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:53.123 11:56:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:53.123 11:56:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:53.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:53.123 11:56:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:53.123 11:56:52 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:44:53.123 11:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:59.847 11:56:58 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:59.847 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:59.847 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:59.847 11:56:58 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:59.847 Found net devices under 0000:31:00.0: cvl_0_0 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:59.847 Found net devices under 0000:31:00.1: cvl_0_1 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:59.847 
11:56:58 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:59.847 11:56:58 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:59.847 11:56:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:59.847 11:56:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:59.847 11:56:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:59.847 11:56:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:59.847 11:56:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:59.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:59.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:44:59.848 00:44:59.848 --- 10.0.0.2 ping statistics --- 00:44:59.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.848 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:59.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:59.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:44:59.848 00:44:59.848 --- 10.0.0.1 ping statistics --- 00:44:59.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.848 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:59.848 11:56:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:03.144 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:03.144 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:03.144 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:03.144 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:45:03.404 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:45:03.404 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:03.404 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:03.665 11:57:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:03.665 11:57:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2905100 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2905100 00:45:03.665 11:57:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2905100 ']' 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:03.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:03.665 11:57:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.925 [2024-12-07 11:57:03.050375] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:45:03.925 [2024-12-07 11:57:03.050506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:03.925 [2024-12-07 11:57:03.199310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:04.186 [2024-12-07 11:57:03.296890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:04.186 [2024-12-07 11:57:03.296931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:04.186 [2024-12-07 11:57:03.296943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:04.186 [2024-12-07 11:57:03.296955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:04.186 [2024-12-07 11:57:03.296966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
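The `waitforlisten 2905100` step above blocks until the target app is accepting connections on its RPC socket (`/var/tmp/spdk.sock`). A minimal Python sketch of that polling loop (an assumed helper, not SPDK's actual `waitforlisten` implementation, which additionally checks that the pid is still alive):

```python
# Sketch of what "waitforlisten <pid>" does above: poll until the app's
# RPC UNIX-domain socket accepts a connection, or give up after a timeout.
# The socket path and retry count are assumptions for illustration.
import socket
import time

def wait_for_listen(sock_path, timeout=100.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True           # target is up and listening
        except OSError:
            time.sleep(interval)  # socket not ready yet; retry
        finally:
            s.close()
    return False
```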
00:45:04.186 [2024-12-07 11:57:03.298167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:04.447 11:57:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:04.447 11:57:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:45:04.447 11:57:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:04.447 11:57:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:04.447 11:57:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 11:57:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:04.709 11:57:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:04.709 11:57:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 [2024-12-07 11:57:03.834060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.709 11:57:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:04.709 11:57:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 ************************************ 00:45:04.709 START TEST fio_dif_1_default 00:45:04.709 ************************************ 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 bdev_null0 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.709 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.710 [2024-12-07 11:57:03.902399] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default 
-- nvmf/common.sh@560 -- # local subsystem config 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:04.710 { 00:45:04.710 "params": { 00:45:04.710 "name": "Nvme$subsystem", 00:45:04.710 "trtype": "$TEST_TRANSPORT", 00:45:04.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:04.710 "adrfam": "ipv4", 00:45:04.710 "trsvcid": "$NVMF_PORT", 00:45:04.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:04.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:04.710 "hdgst": ${hdgst:-false}, 00:45:04.710 "ddgst": ${ddgst:-false} 00:45:04.710 }, 00:45:04.710 "method": "bdev_nvme_attach_controller" 00:45:04.710 } 00:45:04.710 EOF 00:45:04.710 )") 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
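The `gen_nvmf_target_json` heredoc above emits one `bdev_nvme_attach_controller` entry per subsystem id, which fio then reads via `/dev/fd/62`. A Python sketch of the same assembly (not SPDK's shell implementation; addresses and NQNs mirror the values in this log):

```python
# Sketch of what gen_nvmf_target_json builds above: one
# bdev_nvme_attach_controller config entry per subsystem id, using the
# target address (10.0.0.2:4420) seen earlier in this log.
import json

def gen_nvmf_target_json(*subsystems):
    config = []
    for sub in subsystems or (0,):
        config.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,   # hdgst/ddgst default to false, as above
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(config, indent=2)
```

Passing two ids (e.g. `gen_nvmf_target_json(0, 1)`) yields the two-controller config used later by the multi-subsystem test.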
00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:04.710 "params": { 00:45:04.710 "name": "Nvme0", 00:45:04.710 "trtype": "tcp", 00:45:04.710 "traddr": "10.0.0.2", 00:45:04.710 "adrfam": "ipv4", 00:45:04.710 "trsvcid": "4420", 00:45:04.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:04.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:04.710 "hdgst": false, 00:45:04.710 "ddgst": false 00:45:04.710 }, 00:45:04.710 "method": "bdev_nvme_attach_controller" 00:45:04.710 }' 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:04.710 11:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.295 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:05.295 fio-3.35 00:45:05.295 Starting 1 thread 00:45:17.541 00:45:17.541 filename0: (groupid=0, jobs=1): err= 0: pid=2905663: Sat Dec 7 11:57:15 2024 00:45:17.541 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10030msec) 00:45:17.541 slat (nsec): min=6028, max=49931, avg=8258.55, stdev=2935.12 00:45:17.541 clat (usec): min=866, max=43036, avg=41421.01, stdev=2672.79 00:45:17.541 lat (usec): min=873, max=43046, avg=41429.26, stdev=2672.87 00:45:17.541 clat percentiles (usec): 00:45:17.541 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:45:17.541 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:45:17.541 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:17.541 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:45:17.541 | 99.99th=[43254] 00:45:17.541 bw ( KiB/s): min= 352, max= 416, per=99.73%, avg=385.60, stdev=12.61, samples=20 00:45:17.541 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:45:17.541 lat (usec) : 1000=0.41% 00:45:17.541 lat (msec) : 50=99.59% 00:45:17.541 cpu : usr=94.38%, sys=5.35%, ctx=13, majf=0, minf=1633 00:45:17.541 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:17.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.541 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.541 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:17.541 00:45:17.541 Run status group 0 (all jobs): 00:45:17.541 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3872KiB (3965kB), run=10030-10030msec 00:45:17.541 ----------------------------------------------------- 00:45:17.541 Suppressions used: 00:45:17.541 count bytes template 00:45:17.541 1 8 /usr/src/fio/parse.c 00:45:17.541 1 8 libtcmalloc_minimal.so 00:45:17.541 1 904 libcrypto.so 00:45:17.541 ----------------------------------------------------- 00:45:17.541 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.541 00:45:17.541 real 0m12.432s 00:45:17.541 user 0m26.263s 00:45:17.541 sys 0m1.164s 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:17.541 ************************************ 00:45:17.541 END TEST fio_dif_1_default 00:45:17.541 ************************************ 00:45:17.541 11:57:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:17.541 11:57:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:17.541 11:57:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:17.541 11:57:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:17.541 ************************************ 00:45:17.541 START TEST fio_dif_1_multi_subsystems 00:45:17.541 ************************************ 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:17.541 11:57:16 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:17.541 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 bdev_null0 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 [2024-12-07 11:57:16.414403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 bdev_null1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:45:17.542 { 00:45:17.542 "params": { 00:45:17.542 "name": "Nvme$subsystem", 00:45:17.542 "trtype": "$TEST_TRANSPORT", 00:45:17.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:17.542 "adrfam": "ipv4", 00:45:17.542 "trsvcid": "$NVMF_PORT", 00:45:17.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:17.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:17.542 "hdgst": ${hdgst:-false}, 00:45:17.542 "ddgst": ${ddgst:-false} 00:45:17.542 }, 00:45:17.542 "method": "bdev_nvme_attach_controller" 00:45:17.542 } 00:45:17.542 EOF 00:45:17.542 )") 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:17.542 { 00:45:17.542 "params": { 00:45:17.542 "name": "Nvme$subsystem", 00:45:17.542 "trtype": "$TEST_TRANSPORT", 00:45:17.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:17.542 "adrfam": "ipv4", 00:45:17.542 "trsvcid": "$NVMF_PORT", 00:45:17.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:17.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:17.542 "hdgst": ${hdgst:-false}, 00:45:17.542 "ddgst": ${ddgst:-false} 00:45:17.542 }, 00:45:17.542 "method": "bdev_nvme_attach_controller" 00:45:17.542 } 00:45:17.542 EOF 00:45:17.542 )") 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
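As a sanity check on the fio summaries in this log, the reported bandwidth and IOPS follow directly from the issued 4 KiB read count and the runtime; for the single-thread run above (968 reads in 10030 ms):

```python
# Cross-check of the fio summaries in this log: bandwidth (KiB/s) and IOPS
# derived from the issued 4 KiB read count and the reported runtime.
def fio_rates(reads, runtime_ms, bs_kib=4):
    secs = runtime_ms / 1000.0
    return reads * bs_kib / secs, reads / secs  # (KiB/s, IOPS)

bw, iops = fio_rates(968, 10030)   # fio_dif_1_default run above
print(round(bw), round(iops, 1))   # -> 386 96.5, matching BW=386KiB/s
```

The same arithmetic reproduces the per-thread figures of the two-subsystem run (960 reads over 10018 ms gives 383 KiB/s).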
00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:17.542 "params": { 00:45:17.542 "name": "Nvme0", 00:45:17.542 "trtype": "tcp", 00:45:17.542 "traddr": "10.0.0.2", 00:45:17.542 "adrfam": "ipv4", 00:45:17.542 "trsvcid": "4420", 00:45:17.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:17.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:17.542 "hdgst": false, 00:45:17.542 "ddgst": false 00:45:17.542 }, 00:45:17.542 "method": "bdev_nvme_attach_controller" 00:45:17.542 },{ 00:45:17.542 "params": { 00:45:17.542 "name": "Nvme1", 00:45:17.542 "trtype": "tcp", 00:45:17.542 "traddr": "10.0.0.2", 00:45:17.542 "adrfam": "ipv4", 00:45:17.542 "trsvcid": "4420", 00:45:17.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:17.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:17.542 "hdgst": false, 00:45:17.542 "ddgst": false 00:45:17.542 }, 00:45:17.542 "method": "bdev_nvme_attach_controller" 00:45:17.542 }' 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:17.542 11:57:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:17.805 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:17.805 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:17.805 fio-3.35 00:45:17.805 Starting 2 threads 00:45:30.080 00:45:30.080 filename0: (groupid=0, jobs=1): err= 0: pid=2908629: Sat Dec 7 11:57:27 2024 00:45:30.080 read: IOPS=95, BW=383KiB/s (393kB/s)(3840KiB/10018msec) 00:45:30.080 slat (nsec): min=6045, max=50683, avg=8507.37, stdev=3492.23 00:45:30.080 clat (usec): min=40723, max=43012, avg=41715.48, stdev=616.23 00:45:30.080 lat (usec): min=40729, max=43022, avg=41723.98, stdev=616.40 00:45:30.080 clat percentiles (usec): 00:45:30.080 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:45:30.080 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:45:30.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:45:30.080 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:45:30.080 | 99.99th=[43254] 00:45:30.080 bw ( KiB/s): min= 352, max= 416, per=49.73%, avg=382.40, stdev=12.61, samples=20 00:45:30.080 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:45:30.080 lat (msec) : 50=100.00% 00:45:30.080 cpu : usr=95.82%, sys=3.92%, ctx=12, majf=0, minf=1633 00:45:30.080 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.080 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.080 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:30.080 filename1: (groupid=0, jobs=1): err= 0: pid=2908630: Sat Dec 7 11:57:27 2024 00:45:30.080 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10040msec) 00:45:30.080 slat (nsec): min=6041, max=48198, avg=8559.29, stdev=3487.86 00:45:30.080 clat (usec): min=964, max=43038, avg=41459.43, stdev=3742.57 00:45:30.080 lat (usec): min=971, max=43049, avg=41467.99, stdev=3742.62 00:45:30.080 clat percentiles 
(usec): 00:45:30.080 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:45:30.080 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:45:30.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:45:30.080 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:45:30.080 | 99.99th=[43254] 00:45:30.080 bw ( KiB/s): min= 352, max= 416, per=50.12%, avg=385.60, stdev=12.61, samples=20 00:45:30.080 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:45:30.080 lat (usec) : 1000=0.62% 00:45:30.080 lat (msec) : 2=0.21%, 50=99.17% 00:45:30.080 cpu : usr=95.85%, sys=3.89%, ctx=12, majf=0, minf=1632 00:45:30.080 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.080 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.080 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:30.080 00:45:30.080 Run status group 0 (all jobs): 00:45:30.080 READ: bw=768KiB/s (787kB/s), 383KiB/s-386KiB/s (393kB/s-395kB/s), io=7712KiB (7897kB), run=10018-10040msec 00:45:30.080 ----------------------------------------------------- 00:45:30.080 Suppressions used: 00:45:30.080 count bytes template 00:45:30.080 2 16 /usr/src/fio/parse.c 00:45:30.080 1 8 libtcmalloc_minimal.so 00:45:30.080 1 904 libcrypto.so 00:45:30.080 ----------------------------------------------------- 00:45:30.080 00:45:30.080 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 
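The fio summary above is internally consistent: with 4 KiB random reads, reported bandwidth is just IOPS times block size. A quick check using the filename0 averages copied from the stats (95.6 IOPS, avg bw 382.40 KiB/s):

```python
# filename0: avg 95.6 IOPS at bs=4096 B (values taken from the fio output above)
avg_iops = 95.6
bs_kib = 4096 / 1024            # 4 KiB per read
bw_kib_s = avg_iops * bs_kib    # expected bandwidth in KiB/s
```

This reproduces the reported avg of 382.40 KiB/s; the headline 383 KiB/s figure differs only by interval rounding.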
00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 
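The teardown traced here (target/dif.sh@38-39) follows a fixed order per subsystem: remove the NVMe-oF subsystem first, then delete its backing null bdev. A small Python model of that RPC sequence (the `rpc` callable is a stand-in for `rpc_cmd`):

```python
def destroy_subsystems(rpc, subs):
    # Teardown order from target/dif.sh: the nvmf subsystem goes away
    # before the bdev_null device backing its namespace is deleted.
    for sub in subs:
        rpc(f"nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode{sub}")
        rpc(f"bdev_null_delete bdev_null{sub}")

calls = []
destroy_subsystems(calls.append, [0, 1])
```

For subsystems 0 and 1 this yields the same four RPC calls, in the same order, as the trace above.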
00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 00:45:30.081 real 0m12.519s 00:45:30.081 user 0m33.778s 00:45:30.081 sys 0m1.424s 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 ************************************ 00:45:30.081 END TEST fio_dif_1_multi_subsystems 00:45:30.081 ************************************ 00:45:30.081 11:57:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:30.081 11:57:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:30.081 11:57:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 ************************************ 00:45:30.081 START TEST fio_dif_rand_params 00:45:30.081 ************************************ 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # 
create_subsystems 0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 bdev_null0 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.081 [2024-12-07 11:57:29.012115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:30.081 { 00:45:30.081 "params": { 00:45:30.081 "name": "Nvme$subsystem", 00:45:30.081 "trtype": "$TEST_TRANSPORT", 00:45:30.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:30.081 "adrfam": "ipv4", 00:45:30.081 "trsvcid": "$NVMF_PORT", 00:45:30.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:30.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:30.081 "hdgst": ${hdgst:-false}, 00:45:30.081 "ddgst": 
${ddgst:-false} 00:45:30.081 }, 00:45:30.081 "method": "bdev_nvme_attach_controller" 00:45:30.081 } 00:45:30.081 EOF 00:45:30.081 )") 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:30.081 "params": { 00:45:30.081 "name": "Nvme0", 00:45:30.081 "trtype": "tcp", 00:45:30.081 "traddr": "10.0.0.2", 00:45:30.081 "adrfam": "ipv4", 00:45:30.081 "trsvcid": "4420", 00:45:30.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.081 "hdgst": false, 00:45:30.081 "ddgst": false 00:45:30.081 }, 00:45:30.081 "method": "bdev_nvme_attach_controller" 00:45:30.081 }' 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:30.081 11:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.344 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:30.344 ... 
00:45:30.344 fio-3.35 00:45:30.344 Starting 3 threads 00:45:36.931 00:45:36.931 filename0: (groupid=0, jobs=1): err= 0: pid=2911106: Sat Dec 7 11:57:35 2024 00:45:36.931 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5048msec) 00:45:36.931 slat (nsec): min=6261, max=35929, avg=11701.68, stdev=1985.37 00:45:36.931 clat (usec): min=8212, max=53274, avg=14150.86, stdev=5024.06 00:45:36.931 lat (usec): min=8219, max=53287, avg=14162.56, stdev=5024.07 00:45:36.931 clat percentiles (usec): 00:45:36.931 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11076], 00:45:36.931 | 30.00th=[11731], 40.00th=[12911], 50.00th=[13960], 60.00th=[14746], 00:45:36.931 | 70.00th=[15533], 80.00th=[16188], 90.00th=[17171], 95.00th=[17957], 00:45:36.931 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:45:36.931 | 99.99th=[53216] 00:45:36.931 bw ( KiB/s): min=23296, max=29952, per=34.40%, avg=27212.80, stdev=2458.58, samples=10 00:45:36.931 iops : min= 182, max= 234, avg=212.60, stdev=19.21, samples=10 00:45:36.931 lat (msec) : 10=8.91%, 20=89.59%, 50=0.75%, 100=0.75% 00:45:36.931 cpu : usr=95.21%, sys=4.52%, ctx=7, majf=0, minf=1635 00:45:36.931 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.931 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.931 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:36.931 filename0: (groupid=0, jobs=1): err= 0: pid=2911107: Sat Dec 7 11:57:35 2024 00:45:36.931 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(146MiB/5046msec) 00:45:36.931 slat (nsec): min=7902, max=45777, avg=12187.54, stdev=1874.62 00:45:36.931 clat (usec): min=6376, max=58472, avg=12885.20, stdev=4360.41 00:45:36.931 lat (usec): min=6388, max=58481, avg=12897.39, stdev=4360.55 00:45:36.931 clat percentiles (usec): 00:45:36.931 | 
1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10028], 00:45:36.931 | 30.00th=[10945], 40.00th=[11863], 50.00th=[12649], 60.00th=[13435], 00:45:36.931 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:45:36.931 | 99.00th=[19268], 99.50th=[54789], 99.90th=[57410], 99.95th=[58459], 00:45:36.931 | 99.99th=[58459] 00:45:36.931 bw ( KiB/s): min=25856, max=35072, per=37.79%, avg=29900.80, stdev=3359.14, samples=10 00:45:36.931 iops : min= 202, max= 274, avg=233.60, stdev=26.24, samples=10 00:45:36.931 lat (msec) : 10=20.17%, 20=79.06%, 50=0.09%, 100=0.68% 00:45:36.931 cpu : usr=94.51%, sys=5.21%, ctx=7, majf=0, minf=1633 00:45:36.931 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.931 issued rwts: total=1170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.931 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:36.931 filename0: (groupid=0, jobs=1): err= 0: pid=2911108: Sat Dec 7 11:57:35 2024 00:45:36.931 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(111MiB/5002msec) 00:45:36.931 slat (nsec): min=6183, max=45546, avg=11280.37, stdev=2269.52 00:45:36.931 clat (usec): min=7165, max=94351, avg=16960.00, stdev=12866.22 00:45:36.931 lat (usec): min=7175, max=94360, avg=16971.28, stdev=12866.15 00:45:36.931 clat percentiles (usec): 00:45:36.931 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207], 00:45:36.931 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:45:36.932 | 70.00th=[14091], 80.00th=[15139], 90.00th=[48497], 95.00th=[53216], 00:45:36.932 | 99.00th=[55837], 99.50th=[56361], 99.90th=[93848], 99.95th=[93848], 00:45:36.932 | 99.99th=[93848] 00:45:36.932 bw ( KiB/s): min=11776, max=30976, per=28.54%, avg=22579.20, stdev=6293.64, samples=10 00:45:36.932 iops : min= 92, max= 242, avg=176.40, 
stdev=49.17, samples=10 00:45:36.932 lat (msec) : 10=6.00%, 20=83.71%, 50=0.79%, 100=9.50% 00:45:36.932 cpu : usr=94.54%, sys=5.20%, ctx=10, majf=0, minf=1638 00:45:36.932 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.932 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.932 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:36.932 00:45:36.932 Run status group 0 (all jobs): 00:45:36.932 READ: bw=77.3MiB/s (81.0MB/s), 22.1MiB/s-29.0MiB/s (23.2MB/s-30.4MB/s), io=390MiB (409MB), run=5002-5048msec 00:45:36.932 ----------------------------------------------------- 00:45:36.932 Suppressions used: 00:45:36.932 count bytes template 00:45:36.932 5 44 /usr/src/fio/parse.c 00:45:36.932 1 8 libtcmalloc_minimal.so 00:45:36.932 1 904 libcrypto.so 00:45:36.932 ----------------------------------------------------- 00:45:36.932 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 bdev_null0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 [2024-12-07 11:57:36.234414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
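The 128 KiB rand-params summaries earlier obey the same IOPS × block-size identity. Checking the first thread's averages (212.6 IOPS, avg bw 27212.80 KiB/s, values copied from the fio output above):

```python
# filename0 (pid 2911106): avg 212.6 IOPS at bs=128 KiB
avg_iops = 212.6
bw_kib_s = avg_iops * 128       # expected avg bandwidth in KiB/s
bw_mib_s = bw_kib_s / 1024      # the headline 26.4 MiB/s uses the lower
                                # whole-run 211 IOPS figure
```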
00:45:36.932 bdev_null1 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:36.932 bdev_null2 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.932 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.193 { 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme$subsystem", 00:45:37.193 "trtype": "$TEST_TRANSPORT", 00:45:37.193 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "$NVMF_PORT", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.193 "hdgst": ${hdgst:-false}, 00:45:37.193 "ddgst": ${ddgst:-false} 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 } 00:45:37.193 EOF 00:45:37.193 )") 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.193 { 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme$subsystem", 00:45:37.193 "trtype": "$TEST_TRANSPORT", 00:45:37.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "$NVMF_PORT", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.193 "hdgst": ${hdgst:-false}, 00:45:37.193 "ddgst": ${ddgst:-false} 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 } 00:45:37.193 EOF 00:45:37.193 )") 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.193 { 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme$subsystem", 00:45:37.193 "trtype": "$TEST_TRANSPORT", 00:45:37.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "$NVMF_PORT", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.193 "hdgst": ${hdgst:-false}, 00:45:37.193 "ddgst": ${ddgst:-false} 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 } 00:45:37.193 EOF 00:45:37.193 )") 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme0", 00:45:37.193 "trtype": "tcp", 00:45:37.193 "traddr": "10.0.0.2", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "4420", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.193 "hdgst": false, 00:45:37.193 "ddgst": false 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 },{ 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme1", 00:45:37.193 "trtype": "tcp", 00:45:37.193 "traddr": "10.0.0.2", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "4420", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:37.193 "hdgst": false, 00:45:37.193 "ddgst": false 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 },{ 00:45:37.193 "params": { 00:45:37.193 "name": "Nvme2", 00:45:37.193 "trtype": "tcp", 00:45:37.193 "traddr": "10.0.0.2", 00:45:37.193 "adrfam": "ipv4", 00:45:37.193 "trsvcid": "4420", 00:45:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:37.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:37.193 "hdgst": false, 00:45:37.193 "ddgst": false 00:45:37.193 }, 00:45:37.193 "method": "bdev_nvme_attach_controller" 00:45:37.193 }' 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:37.193 11:57:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:37.454 ... 00:45:37.454 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:37.454 ... 00:45:37.454 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:37.454 ... 00:45:37.454 fio-3.35 00:45:37.454 Starting 24 threads 00:45:49.684 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912785: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=446, BW=1784KiB/s (1827kB/s)(17.5MiB/10020msec) 00:45:49.684 slat (nsec): min=6227, max=43205, avg=8899.40, stdev=3134.47 00:45:49.684 clat (usec): min=5234, max=52497, avg=35784.06, stdev=4479.03 00:45:49.684 lat (usec): min=5245, max=52506, avg=35792.96, stdev=4478.74 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[ 9110], 5.00th=[24249], 10.00th=[35914], 20.00th=[36439], 00:45:49.684 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.684 | 99.00th=[38536], 99.50th=[40109], 99.90th=[45876], 99.95th=[51643], 00:45:49.684 | 99.99th=[52691] 00:45:49.684 bw ( KiB/s): min= 1660, max= 2224, per=4.32%, avg=1781.00, stdev=118.59, samples=20 00:45:49.684 iops : min= 415, max= 556, avg=445.25, stdev=29.65, samples=20 00:45:49.684 lat (msec) : 10=1.03%, 20=0.04%, 50=98.84%, 100=0.09% 00:45:49.684 cpu : usr=98.71%, sys=0.95%, ctx=16, majf=0, minf=1634 00:45:49.684 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:45:49.684 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912786: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.8MiB/10037msec) 00:45:49.684 slat (nsec): min=6478, max=84625, avg=23368.02, stdev=10286.98 00:45:49.684 clat (usec): min=35352, max=85822, avg=37240.77, stdev=3790.57 00:45:49.684 lat (usec): min=35370, max=85842, avg=37264.13, stdev=3790.36 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 00:45:49.684 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.684 | 99.00th=[38536], 99.50th=[74974], 99.90th=[85459], 99.95th=[85459], 00:45:49.684 | 99.99th=[85459] 00:45:49.684 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1708.40, stdev=86.09, samples=20 00:45:49.684 iops : min= 384, max= 448, avg=427.10, stdev=21.52, samples=20 00:45:49.684 lat (msec) : 50=99.25%, 100=0.75% 00:45:49.684 cpu : usr=98.73%, sys=0.93%, ctx=13, majf=0, minf=1632 00:45:49.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912787: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=427, BW=1711KiB/s (1752kB/s)(16.9MiB/10096msec) 00:45:49.684 slat (nsec): min=5977, max=78791, avg=13581.38, stdev=9330.71 00:45:49.684 clat (usec): min=14464, 
max=96123, avg=37269.89, stdev=4780.24 00:45:49.684 lat (usec): min=14474, max=96135, avg=37283.47, stdev=4779.85 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[25035], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.684 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.684 | 99.00th=[50594], 99.50th=[80217], 99.90th=[95945], 99.95th=[95945], 00:45:49.684 | 99.99th=[95945] 00:45:49.684 bw ( KiB/s): min= 1648, max= 1792, per=4.17%, avg=1716.75, stdev=63.44, samples=20 00:45:49.684 iops : min= 412, max= 448, avg=429.15, stdev=15.87, samples=20 00:45:49.684 lat (msec) : 20=0.16%, 50=98.61%, 100=1.23% 00:45:49.684 cpu : usr=98.86%, sys=0.80%, ctx=19, majf=0, minf=1636 00:45:49.684 IO depths : 1=5.6%, 2=11.9%, 4=24.9%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 issued rwts: total=4319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912788: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=426, BW=1708KiB/s (1749kB/s)(16.8MiB/10044msec) 00:45:49.684 slat (usec): min=6, max=103, avg=28.40, stdev=15.48 00:45:49.684 clat (usec): min=25083, max=96538, avg=37224.46, stdev=4282.61 00:45:49.684 lat (usec): min=25095, max=96554, avg=37252.86, stdev=4282.23 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[35914], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.684 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.684 | 99.00th=[39584], 99.50th=[71828], 99.90th=[95945], 99.95th=[96994], 00:45:49.684 | 99.99th=[96994] 
00:45:49.684 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1708.35, stdev=85.50, samples=20 00:45:49.684 iops : min= 384, max= 448, avg=427.05, stdev=21.39, samples=20 00:45:49.684 lat (msec) : 50=99.25%, 100=0.75% 00:45:49.684 cpu : usr=98.42%, sys=0.99%, ctx=112, majf=0, minf=1633 00:45:49.684 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912790: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=434, BW=1739KiB/s (1781kB/s)(17.1MiB/10082msec) 00:45:49.684 slat (nsec): min=6209, max=93667, avg=19462.32, stdev=12343.24 00:45:49.684 clat (usec): min=5218, max=85186, avg=36633.04, stdev=4702.74 00:45:49.684 lat (usec): min=5231, max=85204, avg=36652.50, stdev=4703.82 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[11338], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.684 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.684 | 99.00th=[38536], 99.50th=[45351], 99.90th=[85459], 99.95th=[85459], 00:45:49.684 | 99.99th=[85459] 00:45:49.684 bw ( KiB/s): min= 1660, max= 2176, per=4.24%, avg=1746.60, stdev=119.69, samples=20 00:45:49.684 iops : min= 415, max= 544, avg=436.65, stdev=29.92, samples=20 00:45:49.684 lat (msec) : 10=0.89%, 20=0.87%, 50=97.88%, 100=0.36% 00:45:49.684 cpu : usr=98.87%, sys=0.82%, ctx=8, majf=0, minf=1636 00:45:49.684 IO depths : 1=6.0%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 complete : 0=0.0%, 4=94.1%, 
8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.684 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.684 filename0: (groupid=0, jobs=1): err= 0: pid=2912791: Sat Dec 7 11:57:48 2024 00:45:49.684 read: IOPS=432, BW=1732KiB/s (1773kB/s)(17.0MiB/10044msec) 00:45:49.684 slat (usec): min=6, max=131, avg=24.29, stdev=16.17 00:45:49.684 clat (usec): min=20918, max=96736, avg=36735.48, stdev=5429.35 00:45:49.684 lat (usec): min=20975, max=96772, avg=36759.76, stdev=5430.21 00:45:49.684 clat percentiles (usec): 00:45:49.684 | 1.00th=[23725], 5.00th=[30016], 10.00th=[35390], 20.00th=[36439], 00:45:49.684 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:45:49.684 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:45:49.685 | 99.00th=[56886], 99.50th=[71828], 99.90th=[96994], 99.95th=[96994], 00:45:49.685 | 99.99th=[96994] 00:45:49.685 bw ( KiB/s): min= 1536, max= 1888, per=4.21%, avg=1732.35, stdev=98.99, samples=20 00:45:49.685 iops : min= 384, max= 472, avg=433.05, stdev=24.77, samples=20 00:45:49.685 lat (msec) : 50=98.57%, 100=1.43% 00:45:49.685 cpu : usr=98.62%, sys=0.88%, ctx=76, majf=0, minf=1635 00:45:49.685 IO depths : 1=4.7%, 2=9.6%, 4=20.4%, 8=56.9%, 16=8.4%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename0: (groupid=0, jobs=1): err= 0: pid=2912792: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.8MiB/10050msec) 00:45:49.685 slat (nsec): min=5047, max=93244, avg=20645.11, stdev=14278.02 00:45:49.685 clat (msec): min=19, max=119, avg=37.33, stdev= 6.61 00:45:49.685 lat (msec): min=19, 
max=119, avg=37.35, stdev= 6.60 00:45:49.685 clat percentiles (msec): 00:45:49.685 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 37], 00:45:49.685 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:45:49.685 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 44], 00:45:49.685 | 99.00th=[ 59], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:45:49.685 | 99.99th=[ 120] 00:45:49.685 bw ( KiB/s): min= 1472, max= 1856, per=4.15%, avg=1710.40, stdev=81.79, samples=20 00:45:49.685 iops : min= 368, max= 464, avg=427.60, stdev=20.45, samples=20 00:45:49.685 lat (msec) : 20=0.09%, 50=97.76%, 100=1.63%, 250=0.51% 00:45:49.685 cpu : usr=98.92%, sys=0.80%, ctx=14, majf=0, minf=1633 00:45:49.685 IO depths : 1=0.1%, 2=0.5%, 4=2.9%, 8=79.3%, 16=17.2%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=89.7%, 8=9.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename0: (groupid=0, jobs=1): err= 0: pid=2912794: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=428, BW=1715KiB/s (1756kB/s)(16.9MiB/10075msec) 00:45:49.685 slat (nsec): min=6396, max=67764, avg=15033.50, stdev=8298.57 00:45:49.685 clat (usec): min=20444, max=90837, avg=37190.98, stdev=3862.06 00:45:49.685 lat (usec): min=20452, max=90850, avg=37206.01, stdev=3861.88 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[32113], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:45:49.685 | 99.00th=[51119], 99.50th=[54264], 99.90th=[90702], 99.95th=[90702], 00:45:49.685 | 99.99th=[90702] 00:45:49.685 bw ( KiB/s): min= 1660, max= 1792, per=4.18%, avg=1721.20, stdev=64.26, samples=20 
00:45:49.685 iops : min= 415, max= 448, avg=430.30, stdev=16.07, samples=20 00:45:49.685 lat (msec) : 50=98.87%, 100=1.13% 00:45:49.685 cpu : usr=98.78%, sys=0.85%, ctx=79, majf=0, minf=1633 00:45:49.685 IO depths : 1=5.8%, 2=11.9%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912795: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.8MiB/10060msec) 00:45:49.685 slat (usec): min=4, max=102, avg=29.52, stdev=15.05 00:45:49.685 clat (msec): min=20, max=102, avg=37.24, stdev= 4.73 00:45:49.685 lat (msec): min=20, max=102, avg=37.27, stdev= 4.73 00:45:49.685 clat percentiles (msec): 00:45:49.685 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:45:49.685 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:45:49.685 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 39], 00:45:49.685 | 99.00th=[ 40], 99.50th=[ 83], 99.90th=[ 97], 99.95th=[ 97], 00:45:49.685 | 99.99th=[ 104] 00:45:49.685 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1708.90, stdev=85.58, samples=20 00:45:49.685 iops : min= 384, max= 448, avg=427.15, stdev=21.49, samples=20 00:45:49.685 lat (msec) : 50=99.04%, 100=0.91%, 250=0.05% 00:45:49.685 cpu : usr=98.86%, sys=0.79%, ctx=39, majf=0, minf=1633 00:45:49.685 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912796: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=436, BW=1744KiB/s (1786kB/s)(17.1MiB/10018msec) 00:45:49.685 slat (nsec): min=6423, max=75106, avg=18142.49, stdev=9190.55 00:45:49.685 clat (usec): min=7977, max=59140, avg=36530.54, stdev=3570.88 00:45:49.685 lat (usec): min=7986, max=59150, avg=36548.68, stdev=3571.55 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[14615], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[39584], 99.50th=[45876], 99.90th=[52691], 99.95th=[55837], 00:45:49.685 | 99.99th=[58983] 00:45:49.685 bw ( KiB/s): min= 1660, max= 2048, per=4.22%, avg=1740.20, stdev=95.48, samples=20 00:45:49.685 iops : min= 415, max= 512, avg=435.05, stdev=23.87, samples=20 00:45:49.685 lat (msec) : 10=0.73%, 20=0.78%, 50=98.12%, 100=0.37% 00:45:49.685 cpu : usr=98.83%, sys=0.78%, ctx=50, majf=0, minf=1635 00:45:49.685 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912797: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=435, BW=1740KiB/s (1782kB/s)(17.1MiB/10082msec) 00:45:49.685 slat (usec): min=6, max=117, avg=24.52, stdev=14.43 00:45:49.685 clat (usec): min=5498, max=85260, avg=36555.99, stdev=4743.55 00:45:49.685 lat (usec): min=5510, max=85301, avg=36580.50, stdev=4744.59 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[ 9110], 5.00th=[35914], 
10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[38536], 99.50th=[49546], 99.90th=[85459], 99.95th=[85459], 00:45:49.685 | 99.99th=[85459] 00:45:49.685 bw ( KiB/s): min= 1660, max= 2192, per=4.24%, avg=1747.40, stdev=122.72, samples=20 00:45:49.685 iops : min= 415, max= 548, avg=436.85, stdev=30.68, samples=20 00:45:49.685 lat (msec) : 10=1.00%, 20=0.80%, 50=97.77%, 100=0.43% 00:45:49.685 cpu : usr=98.75%, sys=0.89%, ctx=29, majf=0, minf=1634 00:45:49.685 IO depths : 1=6.0%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912799: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=429, BW=1718KiB/s (1760kB/s)(16.9MiB/10046msec) 00:45:49.685 slat (nsec): min=6427, max=82785, avg=25925.65, stdev=13580.98 00:45:49.685 clat (usec): min=22906, max=91577, avg=37012.22, stdev=4201.28 00:45:49.685 lat (usec): min=22916, max=91623, avg=37038.14, stdev=4201.75 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[26608], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[44827], 99.50th=[72877], 99.90th=[85459], 99.95th=[85459], 00:45:49.685 | 99.99th=[91751] 00:45:49.685 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1720.10, stdev=62.52, samples=20 00:45:49.685 iops : min= 416, max= 448, avg=429.95, stdev=15.65, samples=20 00:45:49.685 
lat (msec) : 50=99.07%, 100=0.93% 00:45:49.685 cpu : usr=98.91%, sys=0.73%, ctx=25, majf=0, minf=1633 00:45:49.685 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912800: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=440, BW=1761KiB/s (1804kB/s)(17.2MiB/10020msec) 00:45:49.685 slat (nsec): min=6094, max=85848, avg=16095.59, stdev=10603.52 00:45:49.685 clat (usec): min=6529, max=53562, avg=36206.64, stdev=4190.32 00:45:49.685 lat (usec): min=6538, max=53575, avg=36222.74, stdev=4191.07 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[10552], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[38536], 99.50th=[39060], 99.90th=[51119], 99.95th=[52167], 00:45:49.685 | 99.99th=[53740] 00:45:49.685 bw ( KiB/s): min= 1660, max= 2404, per=4.27%, avg=1758.00, stdev=164.85, samples=20 00:45:49.685 iops : min= 415, max= 601, avg=439.50, stdev=41.21, samples=20 00:45:49.685 lat (msec) : 10=0.95%, 20=1.27%, 50=97.64%, 100=0.14% 00:45:49.685 cpu : usr=98.90%, sys=0.78%, ctx=12, majf=0, minf=1633 00:45:49.685 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912801: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=430, BW=1724KiB/s (1765kB/s)(16.9MiB/10053msec) 00:45:49.685 slat (nsec): min=4879, max=78625, avg=18907.98, stdev=11652.77 00:45:49.685 clat (msec): min=21, max=100, avg=36.97, stdev= 6.15 00:45:49.685 lat (msec): min=21, max=100, avg=36.99, stdev= 6.14 00:45:49.685 clat percentiles (msec): 00:45:49.685 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 37], 00:45:49.685 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:45:49.685 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 43], 00:45:49.685 | 99.00th=[ 53], 99.50th=[ 91], 99.90th=[ 101], 99.95th=[ 101], 00:45:49.685 | 99.99th=[ 101] 00:45:49.685 bw ( KiB/s): min= 1408, max= 1856, per=4.19%, avg=1725.60, stdev=107.41, samples=20 00:45:49.685 iops : min= 352, max= 464, avg=431.40, stdev=26.85, samples=20 00:45:49.685 lat (msec) : 50=98.57%, 100=1.13%, 250=0.30% 00:45:49.685 cpu : usr=98.84%, sys=0.83%, ctx=13, majf=0, minf=1635 00:45:49.685 IO depths : 1=3.9%, 2=8.1%, 4=17.5%, 8=60.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912802: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=427, BW=1711KiB/s (1752kB/s)(16.8MiB/10064msec) 00:45:49.685 slat (nsec): min=4318, max=76851, avg=17224.41, stdev=9407.86 00:45:49.685 clat (usec): min=21194, max=90787, avg=37252.83, stdev=4096.64 00:45:49.685 lat (usec): min=21205, max=90801, avg=37270.05, stdev=4095.93 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[35914], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 
40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:45:49.685 | 99.00th=[52167], 99.50th=[63701], 99.90th=[90702], 99.95th=[90702], 00:45:49.685 | 99.99th=[90702] 00:45:49.685 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1714.80, stdev=73.69, samples=20 00:45:49.685 iops : min= 384, max= 448, avg=428.70, stdev=18.42, samples=20 00:45:49.685 lat (msec) : 50=98.61%, 100=1.39% 00:45:49.685 cpu : usr=98.90%, sys=0.76%, ctx=21, majf=0, minf=1633 00:45:49.685 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename1: (groupid=0, jobs=1): err= 0: pid=2912803: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=434, BW=1737KiB/s (1779kB/s)(17.0MiB/10050msec) 00:45:49.685 slat (nsec): min=4547, max=79910, avg=19783.93, stdev=11539.25 00:45:49.685 clat (usec): min=20293, max=96944, avg=36692.22, stdev=6335.06 00:45:49.685 lat (usec): min=20307, max=97008, avg=36712.00, stdev=6335.95 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[23462], 5.00th=[27919], 10.00th=[30802], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38011], 95.00th=[42730], 00:45:49.685 | 99.00th=[53740], 99.50th=[90702], 99.90th=[96994], 99.95th=[96994], 00:45:49.685 | 99.99th=[96994] 00:45:49.685 bw ( KiB/s): min= 1405, max= 1936, per=4.22%, avg=1738.65, stdev=122.53, samples=20 00:45:49.685 iops : min= 351, max= 484, avg=434.65, stdev=30.67, samples=20 00:45:49.685 lat (msec) : 50=98.14%, 100=1.86% 00:45:49.685 cpu : usr=98.82%, sys=0.86%, ctx=14, 
majf=0, minf=1635 00:45:49.685 IO depths : 1=3.5%, 2=7.3%, 4=16.3%, 8=62.6%, 16=10.2%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=92.0%, 8=3.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912805: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=427, BW=1710KiB/s (1751kB/s)(16.8MiB/10058msec) 00:45:49.685 slat (nsec): min=6420, max=99735, avg=25492.00, stdev=16020.69 00:45:49.685 clat (usec): min=20557, max=96099, avg=37217.67, stdev=4704.13 00:45:49.685 lat (usec): min=20574, max=96109, avg=37243.16, stdev=4704.25 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[26870], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[57410], 99.50th=[63177], 99.90th=[95945], 99.95th=[95945], 00:45:49.685 | 99.99th=[95945] 00:45:49.685 bw ( KiB/s): min= 1552, max= 1792, per=4.16%, avg=1713.20, stdev=73.14, samples=20 00:45:49.685 iops : min= 388, max= 448, avg=428.30, stdev=18.28, samples=20 00:45:49.685 lat (msec) : 50=98.44%, 100=1.56% 00:45:49.685 cpu : usr=98.77%, sys=0.88%, ctx=45, majf=0, minf=1637 00:45:49.685 IO depths : 1=5.4%, 2=11.5%, 4=24.3%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912806: Sat Dec 7 11:57:48 2024 
00:45:49.685 read: IOPS=430, BW=1722KiB/s (1764kB/s)(16.9MiB/10046msec) 00:45:49.685 slat (usec): min=4, max=133, avg=23.86, stdev=15.92 00:45:49.685 clat (msec): min=18, max=100, avg=36.96, stdev= 6.21 00:45:49.685 lat (msec): min=18, max=100, avg=36.99, stdev= 6.21 00:45:49.685 clat percentiles (msec): 00:45:49.685 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 37], 00:45:49.685 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:45:49.685 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 43], 00:45:49.685 | 99.00th=[ 52], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 101], 00:45:49.685 | 99.99th=[ 101] 00:45:49.685 bw ( KiB/s): min= 1408, max= 1904, per=4.18%, avg=1723.20, stdev=107.86, samples=20 00:45:49.685 iops : min= 352, max= 476, avg=430.80, stdev=26.97, samples=20 00:45:49.685 lat (msec) : 20=0.09%, 50=98.75%, 100=0.79%, 250=0.37% 00:45:49.685 cpu : usr=98.81%, sys=0.86%, ctx=17, majf=0, minf=1635 00:45:49.685 IO depths : 1=3.6%, 2=7.3%, 4=15.5%, 8=63.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=91.8%, 8=4.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912807: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=428, BW=1714KiB/s (1755kB/s)(16.8MiB/10047msec) 00:45:49.685 slat (nsec): min=6261, max=93697, avg=23968.66, stdev=14101.71 00:45:49.685 clat (usec): min=26437, max=85966, avg=37135.89, stdev=3384.57 00:45:49.685 lat (usec): min=26446, max=85978, avg=37159.86, stdev=3383.80 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 
90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[38536], 99.50th=[58459], 99.90th=[85459], 99.95th=[85459], 00:45:49.685 | 99.99th=[85459] 00:45:49.685 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1715.15, stdev=63.97, samples=20 00:45:49.685 iops : min= 416, max= 448, avg=428.75, stdev=16.02, samples=20 00:45:49.685 lat (msec) : 50=99.26%, 100=0.74% 00:45:49.685 cpu : usr=98.91%, sys=0.74%, ctx=15, majf=0, minf=1634 00:45:49.685 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912808: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=428, BW=1715KiB/s (1756kB/s)(16.9MiB/10077msec) 00:45:49.685 slat (nsec): min=6396, max=90616, avg=25246.85, stdev=12869.43 00:45:49.685 clat (usec): min=26327, max=85600, avg=37104.06, stdev=3237.93 00:45:49.685 lat (usec): min=26354, max=85617, avg=37129.30, stdev=3237.70 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[43254], 99.50th=[45351], 99.90th=[85459], 99.95th=[85459], 00:45:49.685 | 99.99th=[85459] 00:45:49.685 bw ( KiB/s): min= 1660, max= 1792, per=4.18%, avg=1721.20, stdev=65.30, samples=20 00:45:49.685 iops : min= 415, max= 448, avg=430.30, stdev=16.33, samples=20 00:45:49.685 lat (msec) : 50=99.58%, 100=0.42% 00:45:49.685 cpu : usr=98.91%, sys=0.75%, ctx=18, majf=0, minf=1635 00:45:49.685 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912809: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=429, BW=1719KiB/s (1761kB/s)(16.9MiB/10060msec) 00:45:49.685 slat (usec): min=4, max=110, avg=25.10, stdev=16.49 00:45:49.685 clat (usec): min=21336, max=90778, avg=36994.41, stdev=5042.64 00:45:49.685 lat (usec): min=21393, max=90796, avg=37019.51, stdev=5041.99 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[24249], 5.00th=[32113], 10.00th=[35914], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:45:49.685 | 99.00th=[50594], 99.50th=[83362], 99.90th=[90702], 99.95th=[90702], 00:45:49.685 | 99.99th=[90702] 00:45:49.685 bw ( KiB/s): min= 1536, max= 1808, per=4.18%, avg=1723.15, stdev=76.36, samples=20 00:45:49.685 iops : min= 384, max= 452, avg=430.75, stdev=19.12, samples=20 00:45:49.685 lat (msec) : 50=98.98%, 100=1.02% 00:45:49.685 cpu : usr=98.94%, sys=0.73%, ctx=14, majf=0, minf=1634 00:45:49.685 IO depths : 1=5.0%, 2=10.1%, 4=20.8%, 8=56.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=93.1%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912811: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=426, BW=1706KiB/s (1747kB/s)(16.8MiB/10055msec) 00:45:49.685 slat (nsec): min=6109, 
max=94677, avg=23440.29, stdev=15520.43 00:45:49.685 clat (usec): min=23516, max=96566, avg=37324.29, stdev=4712.91 00:45:49.685 lat (usec): min=23529, max=96579, avg=37347.74, stdev=4711.66 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[35914], 5.00th=[35914], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.685 | 99.00th=[39584], 99.50th=[84411], 99.90th=[96994], 99.95th=[96994], 00:45:49.685 | 99.99th=[96994] 00:45:49.685 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1708.75, stdev=85.66, samples=20 00:45:49.685 iops : min= 384, max= 448, avg=427.15, stdev=21.49, samples=20 00:45:49.685 lat (msec) : 50=99.21%, 100=0.79% 00:45:49.685 cpu : usr=98.78%, sys=0.90%, ctx=14, majf=0, minf=1632 00:45:49.685 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912812: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=430, BW=1722KiB/s (1764kB/s)(16.9MiB/10052msec) 00:45:49.685 slat (nsec): min=4480, max=84648, avg=24309.12, stdev=12058.05 00:45:49.685 clat (usec): min=15211, max=96871, avg=36947.03, stdev=5677.89 00:45:49.685 lat (usec): min=15238, max=96895, avg=36971.34, stdev=5678.15 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[24773], 5.00th=[30016], 10.00th=[35914], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38536], 00:45:49.685 | 99.00th=[51643], 
99.50th=[92799], 99.90th=[96994], 99.95th=[96994], 00:45:49.685 | 99.99th=[96994] 00:45:49.685 bw ( KiB/s): min= 1456, max= 1952, per=4.19%, avg=1724.60, stdev=112.25, samples=20 00:45:49.685 iops : min= 364, max= 488, avg=431.15, stdev=28.06, samples=20 00:45:49.685 lat (msec) : 20=0.09%, 50=98.80%, 100=1.11% 00:45:49.685 cpu : usr=98.93%, sys=0.75%, ctx=14, majf=0, minf=1631 00:45:49.685 IO depths : 1=5.3%, 2=10.6%, 4=21.9%, 8=54.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:45:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.685 issued rwts: total=4328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.685 filename2: (groupid=0, jobs=1): err= 0: pid=2912813: Sat Dec 7 11:57:48 2024 00:45:49.685 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.8MiB/10057msec) 00:45:49.685 slat (nsec): min=4433, max=83587, avg=23470.06, stdev=11458.55 00:45:49.685 clat (usec): min=22248, max=98643, avg=37328.48, stdev=4783.50 00:45:49.685 lat (usec): min=22258, max=98662, avg=37351.95, stdev=4783.30 00:45:49.685 clat percentiles (usec): 00:45:49.685 | 1.00th=[35914], 5.00th=[36439], 10.00th=[36439], 20.00th=[36439], 00:45:49.685 | 30.00th=[36439], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:45:49.685 | 70.00th=[36963], 80.00th=[37487], 90.00th=[37487], 95.00th=[38011], 00:45:49.686 | 99.00th=[39584], 99.50th=[85459], 99.90th=[95945], 99.95th=[95945], 00:45:49.686 | 99.99th=[99091] 00:45:49.686 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1708.60, stdev=85.98, samples=20 00:45:49.686 iops : min= 384, max= 448, avg=427.15, stdev=21.49, samples=20 00:45:49.686 lat (msec) : 50=99.21%, 100=0.79% 00:45:49.686 cpu : usr=98.78%, sys=0.88%, ctx=13, majf=0, minf=1633 00:45:49.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:49.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:49.686 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:49.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:49.686 00:45:49.686 Run status group 0 (all jobs): 00:45:49.686 READ: bw=40.2MiB/s (42.2MB/s), 1705KiB/s-1784KiB/s (1746kB/s-1827kB/s), io=406MiB (426MB), run=10018-10096msec 00:45:49.947 ----------------------------------------------------- 00:45:49.947 Suppressions used: 00:45:49.947 count bytes template 00:45:49.947 45 402 /usr/src/fio/parse.c 00:45:49.947 1 8 libtcmalloc_minimal.so 00:45:49.947 1 904 libcrypto.so 00:45:49.947 ----------------------------------------------------- 00:45:49.947 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 
11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 bdev_null0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:49.947 11:57:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.947 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 [2024-12-07 11:57:49.242618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 bdev_null1 00:45:49.948 11:57:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:49.948 { 00:45:49.948 "params": { 00:45:49.948 "name": "Nvme$subsystem", 00:45:49.948 "trtype": "$TEST_TRANSPORT", 00:45:49.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:49.948 "adrfam": "ipv4", 00:45:49.948 "trsvcid": "$NVMF_PORT", 00:45:49.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:49.948 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:49.948 "hdgst": ${hdgst:-false}, 00:45:49.948 "ddgst": ${ddgst:-false} 00:45:49.948 }, 00:45:49.948 "method": "bdev_nvme_attach_controller" 00:45:49.948 } 00:45:49.948 EOF 00:45:49.948 )") 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:49.948 { 00:45:49.948 "params": { 00:45:49.948 "name": "Nvme$subsystem", 00:45:49.948 "trtype": "$TEST_TRANSPORT", 00:45:49.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:49.948 "adrfam": "ipv4", 00:45:49.948 "trsvcid": "$NVMF_PORT", 00:45:49.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:49.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:49.948 "hdgst": ${hdgst:-false}, 00:45:49.948 "ddgst": ${ddgst:-false} 00:45:49.948 }, 00:45:49.948 "method": "bdev_nvme_attach_controller" 00:45:49.948 } 00:45:49.948 EOF 00:45:49.948 )") 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:49.948 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:50.210 
11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:50.210 "params": { 00:45:50.210 "name": "Nvme0", 00:45:50.210 "trtype": "tcp", 00:45:50.210 "traddr": "10.0.0.2", 00:45:50.210 "adrfam": "ipv4", 00:45:50.210 "trsvcid": "4420", 00:45:50.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:50.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:50.210 "hdgst": false, 00:45:50.210 "ddgst": false 00:45:50.210 }, 00:45:50.210 "method": "bdev_nvme_attach_controller" 00:45:50.210 },{ 00:45:50.210 "params": { 00:45:50.210 "name": "Nvme1", 00:45:50.210 "trtype": "tcp", 00:45:50.210 "traddr": "10.0.0.2", 00:45:50.210 "adrfam": "ipv4", 00:45:50.210 "trsvcid": "4420", 00:45:50.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:50.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:50.210 "hdgst": false, 00:45:50.210 "ddgst": false 00:45:50.210 }, 00:45:50.210 "method": "bdev_nvme_attach_controller" 00:45:50.210 }' 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:50.210 11:57:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.471 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:50.471 ... 
00:45:50.471 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:50.471 ... 00:45:50.471 fio-3.35 00:45:50.471 Starting 4 threads 00:45:57.054 00:45:57.054 filename0: (groupid=0, jobs=1): err= 0: pid=2915343: Sat Dec 7 11:57:55 2024 00:45:57.054 read: IOPS=1872, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5003msec) 00:45:57.054 slat (nsec): min=6125, max=45382, avg=8801.71, stdev=2511.61 00:45:57.054 clat (usec): min=1406, max=6984, avg=4248.18, stdev=379.06 00:45:57.054 lat (usec): min=1424, max=6993, avg=4256.98, stdev=378.93 00:45:57.054 clat percentiles (usec): 00:45:57.054 | 1.00th=[ 3228], 5.00th=[ 3556], 10.00th=[ 3884], 20.00th=[ 4047], 00:45:57.054 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:45:57.054 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686], 00:45:57.054 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[ 6849], 00:45:57.054 | 99.99th=[ 6980] 00:45:57.054 bw ( KiB/s): min=14400, max=15392, per=25.32%, avg=14961.78, stdev=280.39, samples=9 00:45:57.054 iops : min= 1800, max= 1924, avg=1870.22, stdev=35.05, samples=9 00:45:57.054 lat (msec) : 2=0.10%, 4=12.92%, 10=86.98% 00:45:57.054 cpu : usr=97.80%, sys=1.92%, ctx=6, majf=0, minf=1634 00:45:57.054 IO depths : 1=0.1%, 2=0.3%, 4=67.6%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.054 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.054 issued rwts: total=9370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.054 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.054 filename0: (groupid=0, jobs=1): err= 0: pid=2915344: Sat Dec 7 11:57:55 2024 00:45:57.054 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5002msec) 00:45:57.054 slat (nsec): min=6015, max=45452, avg=8938.64, stdev=2546.66 00:45:57.054 clat (usec): min=1059, max=7488, avg=4254.25, 
stdev=419.52 00:45:57.054 lat (usec): min=1065, max=7529, avg=4263.19, stdev=419.33 00:45:57.054 clat percentiles (usec): 00:45:57.055 | 1.00th=[ 3130], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4047], 00:45:57.055 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:45:57.055 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4752], 00:45:57.055 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7177], 00:45:57.055 | 99.99th=[ 7504] 00:45:57.055 bw ( KiB/s): min=14352, max=15344, per=25.30%, avg=14951.11, stdev=279.67, samples=9 00:45:57.055 iops : min= 1794, max= 1918, avg=1868.89, stdev=34.96, samples=9 00:45:57.055 lat (msec) : 2=0.14%, 4=15.42%, 10=84.45% 00:45:57.055 cpu : usr=96.92%, sys=2.76%, ctx=6, majf=0, minf=1637 00:45:57.055 IO depths : 1=0.1%, 2=0.4%, 4=68.2%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.055 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.055 issued rwts: total=9354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.055 filename1: (groupid=0, jobs=1): err= 0: pid=2915345: Sat Dec 7 11:57:55 2024 00:45:57.055 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5003msec) 00:45:57.055 slat (nsec): min=6010, max=42602, avg=7667.77, stdev=2214.10 00:45:57.055 clat (usec): min=2215, max=7448, avg=4340.09, stdev=602.59 00:45:57.055 lat (usec): min=2222, max=7491, avg=4347.76, stdev=602.40 00:45:57.055 clat percentiles (usec): 00:45:57.055 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 4146], 00:45:57.055 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:45:57.055 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5997], 00:45:57.055 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7111], 99.95th=[ 7111], 00:45:57.055 | 99.99th=[ 7439] 00:45:57.055 bw ( KiB/s): min=14256, 
max=16080, per=24.83%, avg=14674.90, stdev=531.54, samples=10 00:45:57.055 iops : min= 1782, max= 2010, avg=1834.30, stdev=66.42, samples=10 00:45:57.055 lat (msec) : 4=13.67%, 10=86.33% 00:45:57.055 cpu : usr=97.06%, sys=2.68%, ctx=6, majf=0, minf=1634 00:45:57.055 IO depths : 1=0.1%, 2=1.0%, 4=70.8%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.055 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.055 issued rwts: total=9175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.055 filename1: (groupid=0, jobs=1): err= 0: pid=2915347: Sat Dec 7 11:57:55 2024 00:45:57.055 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5002msec) 00:45:57.055 slat (nsec): min=6023, max=37630, avg=9170.27, stdev=2205.57 00:45:57.055 clat (usec): min=2982, max=46852, avg=4392.46, stdev=1320.46 00:45:57.055 lat (usec): min=2989, max=46890, avg=4401.63, stdev=1320.51 00:45:57.055 clat percentiles (usec): 00:45:57.055 | 1.00th=[ 3752], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4228], 00:45:57.055 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:45:57.055 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:45:57.055 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6849], 99.95th=[46924], 00:45:57.055 | 99.99th=[46924] 00:45:57.055 bw ( KiB/s): min=13360, max=14896, per=24.52%, avg=14488.89, stdev=484.06, samples=9 00:45:57.055 iops : min= 1670, max= 1862, avg=1811.11, stdev=60.51, samples=9 00:45:57.055 lat (msec) : 4=3.96%, 10=95.95%, 50=0.09% 00:45:57.055 cpu : usr=96.86%, sys=2.84%, ctx=8, majf=0, minf=1636 00:45:57.055 IO depths : 1=0.1%, 2=0.1%, 4=73.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.055 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:57.055 issued rwts: total=9055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.055 00:45:57.055 Run status group 0 (all jobs): 00:45:57.055 READ: bw=57.7MiB/s (60.5MB/s), 14.1MiB/s-14.6MiB/s (14.8MB/s-15.3MB/s), io=289MiB (303MB), run=5002-5003msec 00:45:57.314 ----------------------------------------------------- 00:45:57.315 Suppressions used: 00:45:57.315 count bytes template 00:45:57.315 6 52 /usr/src/fio/parse.c 00:45:57.315 1 8 libtcmalloc_minimal.so 00:45:57.315 1 904 libcrypto.so 00:45:57.315 ----------------------------------------------------- 00:45:57.315 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 00:45:57.575 real 0m27.738s 00:45:57.575 user 5m16.778s 00:45:57.575 sys 0m5.153s 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 ************************************ 00:45:57.575 END TEST fio_dif_rand_params 00:45:57.575 ************************************ 00:45:57.575 11:57:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:57.575 11:57:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:57.575 11:57:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 ************************************ 00:45:57.575 START TEST fio_dif_digest 00:45:57.575 
************************************ 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 bdev_null0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.575 [2024-12-07 11:57:56.834089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.575 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:57.576 { 00:45:57.576 "params": { 00:45:57.576 "name": "Nvme$subsystem", 00:45:57.576 "trtype": "$TEST_TRANSPORT", 00:45:57.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:57.576 "adrfam": "ipv4", 00:45:57.576 "trsvcid": "$NVMF_PORT", 00:45:57.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:57.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:57.576 "hdgst": ${hdgst:-false}, 00:45:57.576 "ddgst": ${ddgst:-false} 00:45:57.576 }, 00:45:57.576 "method": "bdev_nvme_attach_controller" 00:45:57.576 } 
00:45:57.576 EOF 00:45:57.576 )") 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:57.576 "params": { 00:45:57.576 "name": "Nvme0", 00:45:57.576 "trtype": "tcp", 00:45:57.576 "traddr": "10.0.0.2", 00:45:57.576 "adrfam": "ipv4", 00:45:57.576 "trsvcid": "4420", 00:45:57.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:57.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:57.576 "hdgst": true, 00:45:57.576 "ddgst": true 00:45:57.576 }, 00:45:57.576 "method": "bdev_nvme_attach_controller" 00:45:57.576 }' 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:57.576 11:57:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:45:58.177 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:58.177 ... 00:45:58.177 fio-3.35 00:45:58.177 Starting 3 threads 00:46:10.411 00:46:10.411 filename0: (groupid=0, jobs=1): err= 0: pid=2916898: Sat Dec 7 11:58:08 2024 00:46:10.411 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(271MiB/10049msec) 00:46:10.411 slat (nsec): min=6473, max=66466, avg=10834.74, stdev=2062.31 00:46:10.411 clat (usec): min=8401, max=54058, avg=13851.78, stdev=2547.31 00:46:10.411 lat (usec): min=8410, max=54068, avg=13862.62, stdev=2547.57 00:46:10.411 clat percentiles (usec): 00:46:10.411 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10421], 20.00th=[11207], 00:46:10.411 | 30.00th=[12518], 40.00th=[13829], 50.00th=[14353], 60.00th=[14746], 00:46:10.411 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:46:10.411 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[49546], 00:46:10.411 | 99.99th=[54264] 00:46:10.411 bw ( KiB/s): min=24832, max=29696, per=39.19%, avg=27763.20, stdev=1395.94, samples=20 00:46:10.411 iops : min= 194, max= 232, avg=216.90, stdev=10.91, samples=20 00:46:10.411 lat (msec) : 10=4.97%, 20=94.93%, 50=0.05%, 100=0.05% 00:46:10.411 cpu : usr=94.56%, sys=5.17%, ctx=19, majf=0, minf=1634 00:46:10.411 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.411 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.411 filename0: (groupid=0, jobs=1): err= 0: pid=2916899: Sat Dec 7 11:58:08 2024 00:46:10.411 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10047msec) 00:46:10.411 slat (nsec): min=6502, max=46745, avg=10698.12, stdev=1917.36 00:46:10.411 clat (usec): min=9067, 
max=58784, avg=14916.24, stdev=3021.88 00:46:10.411 lat (usec): min=9076, max=58794, avg=14926.94, stdev=3022.01 00:46:10.411 clat percentiles (usec): 00:46:10.411 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11338], 20.00th=[12256], 00:46:10.411 | 30.00th=[13698], 40.00th=[14877], 50.00th=[15401], 60.00th=[15926], 00:46:10.411 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17957], 00:46:10.411 | 99.00th=[19268], 99.50th=[19530], 99.90th=[56886], 99.95th=[58459], 00:46:10.411 | 99.99th=[58983] 00:46:10.411 bw ( KiB/s): min=24064, max=27136, per=36.39%, avg=25779.20, stdev=1010.78, samples=20 00:46:10.411 iops : min= 188, max= 212, avg=201.40, stdev= 7.90, samples=20 00:46:10.411 lat (msec) : 10=0.55%, 20=99.06%, 50=0.15%, 100=0.25% 00:46:10.411 cpu : usr=95.18%, sys=4.55%, ctx=16, majf=0, minf=1634 00:46:10.411 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.411 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.411 filename0: (groupid=0, jobs=1): err= 0: pid=2916901: Sat Dec 7 11:58:08 2024 00:46:10.411 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(172MiB/10046msec) 00:46:10.411 slat (nsec): min=6752, max=44581, avg=10690.71, stdev=1993.24 00:46:10.411 clat (usec): min=11067, max=98255, avg=21888.73, stdev=14257.82 00:46:10.411 lat (usec): min=11077, max=98265, avg=21899.42, stdev=14257.67 00:46:10.411 clat percentiles (usec): 00:46:10.411 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15139], 20.00th=[15533], 00:46:10.411 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16581], 60.00th=[17171], 00:46:10.411 | 70.00th=[17433], 80.00th=[18220], 90.00th=[55837], 95.00th=[57410], 00:46:10.411 | 99.00th=[58983], 99.50th=[60556], 99.90th=[98042], 99.95th=[98042], 00:46:10.411 | 
99.99th=[98042] 00:46:10.411 bw ( KiB/s): min=14336, max=22016, per=24.79%, avg=17559.55, stdev=2149.48, samples=20 00:46:10.411 iops : min= 112, max= 172, avg=137.15, stdev=16.75, samples=20 00:46:10.411 lat (msec) : 20=86.68%, 50=0.22%, 100=13.10% 00:46:10.411 cpu : usr=95.72%, sys=4.02%, ctx=19, majf=0, minf=1639 00:46:10.411 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.411 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.411 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.411 00:46:10.411 Run status group 0 (all jobs): 00:46:10.411 READ: bw=69.2MiB/s (72.5MB/s), 17.1MiB/s-27.0MiB/s (17.9MB/s-28.3MB/s), io=695MiB (729MB), run=10046-10049msec 00:46:10.411 ----------------------------------------------------- 00:46:10.411 Suppressions used: 00:46:10.411 count bytes template 00:46:10.411 5 44 /usr/src/fio/parse.c 00:46:10.411 1 8 libtcmalloc_minimal.so 00:46:10.411 1 904 libcrypto.so 00:46:10.411 ----------------------------------------------------- 00:46:10.411 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.411 
11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.411 00:46:10.411 real 0m12.346s 00:46:10.411 user 0m41.524s 00:46:10.411 sys 0m1.988s 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:10.411 11:58:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.411 ************************************ 00:46:10.411 END TEST fio_dif_digest 00:46:10.411 ************************************ 00:46:10.411 11:58:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:10.411 11:58:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:10.411 rmmod nvme_tcp 00:46:10.411 rmmod nvme_fabrics 00:46:10.411 rmmod nvme_keyring 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2905100 ']' 00:46:10.411 11:58:09 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2905100 00:46:10.411 11:58:09 nvmf_dif -- 
common/autotest_common.sh@954 -- # '[' -z 2905100 ']' 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2905100 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2905100 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2905100' 00:46:10.411 killing process with pid 2905100 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2905100 00:46:10.411 11:58:09 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2905100 00:46:10.983 11:58:10 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:10.983 11:58:10 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:14.283 Waiting for block devices as requested 00:46:14.283 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:14.283 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:14.283 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:14.283 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:14.283 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:14.283 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:14.543 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:14.543 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:14.543 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:14.803 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:14.803 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:14.803 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:15.064 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:15.064 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:46:15.064 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:15.064 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:15.324 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:15.584 11:58:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:15.584 11:58:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:15.584 11:58:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:17.505 11:58:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:17.505 00:46:17.505 real 1m24.890s 00:46:17.505 user 8m7.651s 00:46:17.505 sys 0m22.437s 00:46:17.505 11:58:16 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:17.505 11:58:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:17.505 ************************************ 00:46:17.505 END TEST nvmf_dif 00:46:17.505 ************************************ 00:46:17.765 11:58:16 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:17.765 11:58:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:17.765 11:58:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:17.765 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:46:17.765 ************************************ 
00:46:17.765 START TEST nvmf_abort_qd_sizes 00:46:17.765 ************************************ 00:46:17.765 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:17.765 * Looking for test storage... 00:46:17.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:17.765 
11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:17.765 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:18.026 --rc genhtml_branch_coverage=1 00:46:18.026 --rc genhtml_function_coverage=1 00:46:18.026 --rc genhtml_legend=1 00:46:18.026 --rc geninfo_all_blocks=1 00:46:18.026 --rc geninfo_unexecuted_blocks=1 00:46:18.026 00:46:18.026 ' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:18.026 --rc genhtml_branch_coverage=1 00:46:18.026 --rc 
genhtml_function_coverage=1 00:46:18.026 --rc genhtml_legend=1 00:46:18.026 --rc geninfo_all_blocks=1 00:46:18.026 --rc geninfo_unexecuted_blocks=1 00:46:18.026 00:46:18.026 ' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:18.026 --rc genhtml_branch_coverage=1 00:46:18.026 --rc genhtml_function_coverage=1 00:46:18.026 --rc genhtml_legend=1 00:46:18.026 --rc geninfo_all_blocks=1 00:46:18.026 --rc geninfo_unexecuted_blocks=1 00:46:18.026 00:46:18.026 ' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:18.026 --rc genhtml_branch_coverage=1 00:46:18.026 --rc genhtml_function_coverage=1 00:46:18.026 --rc genhtml_legend=1 00:46:18.026 --rc geninfo_all_blocks=1 00:46:18.026 --rc geninfo_unexecuted_blocks=1 00:46:18.026 00:46:18.026 ' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:18.026 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:18.027 11:58:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:18.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:18.027 11:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:26.180 11:58:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:26.180 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:26.180 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:26.180 Found net devices under 0000:31:00.0: cvl_0_0 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:26.180 Found net devices under 0000:31:00.1: cvl_0_1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:26.180 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:26.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:26.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:46:26.181 00:46:26.181 --- 10.0.0.2 ping statistics --- 00:46:26.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:26.181 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:26.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:26.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:46:26.181 00:46:26.181 --- 10.0.0.1 ping statistics --- 00:46:26.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:26.181 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:26.181 11:58:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:28.727 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:28.727 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:28.989 11:58:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2926536 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2926536 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2926536 ']' 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:28.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:28.989 11:58:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:28.989 [2024-12-07 11:58:28.262710] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:46:28.989 [2024-12-07 11:58:28.262832] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:29.251 [2024-12-07 11:58:28.415083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:29.251 [2024-12-07 11:58:28.516830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:29.251 [2024-12-07 11:58:28.516873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:29.251 [2024-12-07 11:58:28.516884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:29.251 [2024-12-07 11:58:28.516896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:29.251 [2024-12-07 11:58:28.516905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:29.251 [2024-12-07 11:58:28.519146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:29.251 [2024-12-07 11:58:28.519278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:29.251 [2024-12-07 11:58:28.519405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:29.251 [2024-12-07 11:58:28.519428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:29.825 11:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:29.825 ************************************ 00:46:29.825 START TEST spdk_target_abort 00:46:29.825 ************************************ 00:46:29.825 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:29.825 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:29.825 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:46:29.825 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:29.825 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.397 spdk_targetn1 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.397 [2024-12-07 11:58:29.479755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.397 [2024-12-07 11:58:29.520489] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.397 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:30.398 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.658 [2024-12-07 11:58:29.815367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:760 len:8 PRP1 0x200004ac3000 PRP2 0x0 00:46:30.658 [2024-12-07 11:58:29.815399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0060 p:1 m:0 dnr:0 00:46:30.658 [2024-12-07 11:58:29.855093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2080 len:8 PRP1 0x200004ac5000 PRP2 0x0 00:46:30.658 [2024-12-07 11:58:29.855121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:46:34.045 Initializing NVMe Controllers 00:46:34.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:34.045 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:34.045 Initialization complete. Launching workers. 00:46:34.045 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12173, failed: 2 00:46:34.045 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2968, failed to submit 9207 00:46:34.045 success 743, unsuccessful 2225, failed 0 00:46:34.045 11:58:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:34.045 11:58:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:34.045 [2024-12-07 11:58:33.133390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e4f000 PRP2 0x0 00:46:34.045 [2024-12-07 11:58:33.133436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:46:34.045 [2024-12-07 11:58:33.173312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:1208 len:8 PRP1 0x200004e4f000 PRP2 0x0 00:46:34.045 [2024-12-07 11:58:33.173347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:46:36.590 [2024-12-07 11:58:35.333030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:48840 len:8 PRP1 0x200004e49000 PRP2 0x0 00:46:36.590 [2024-12-07 11:58:35.333074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00e3 p:0 m:0 dnr:0 00:46:36.590 [2024-12-07 11:58:35.432317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:50968 len:8 PRP1 0x200004e3b000 PRP2 0x0 
00:46:36.590 [2024-12-07 11:58:35.432351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00e9 p:1 m:0 dnr:0 00:46:37.167 Initializing NVMe Controllers 00:46:37.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:37.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:37.167 Initialization complete. Launching workers. 00:46:37.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8375, failed: 4 00:46:37.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7165 00:46:37.167 success 319, unsuccessful 895, failed 0 00:46:37.167 11:58:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:37.167 11:58:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:40.470 Initializing NVMe Controllers 00:46:40.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:40.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:40.470 Initialization complete. Launching workers. 
00:46:40.470 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37931, failed: 0 00:46:40.470 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2823, failed to submit 35108 00:46:40.470 success 592, unsuccessful 2231, failed 0 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.470 11:58:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2926536 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2926536 ']' 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2926536 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2926536 00:46:42.386 11:58:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2926536' 00:46:42.386 killing process with pid 2926536 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2926536 00:46:42.386 11:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2926536 00:46:42.957 00:46:42.957 real 0m12.968s 00:46:42.957 user 0m51.778s 00:46:42.957 sys 0m2.126s 00:46:42.957 11:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:42.957 11:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:42.957 ************************************ 00:46:42.957 END TEST spdk_target_abort 00:46:42.957 ************************************ 00:46:42.957 11:58:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:42.957 11:58:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:42.957 11:58:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:42.957 11:58:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:42.957 ************************************ 00:46:42.957 START TEST kernel_target_abort 00:46:42.957 ************************************ 00:46:42.957 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:42.957 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:42.958 11:58:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:42.958 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:46.263 Waiting for block devices as requested 00:46:46.263 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:46.263 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:46.524 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:46.524 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:46.524 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:46.785 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:46.785 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:46.785 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:47.046 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:47.046 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:47.307 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:47.307 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:47.307 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:47.307 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:47.568 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:47.568 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:47.568 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:48.510 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:48.510 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:48.510 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:48.510 11:58:47 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:48.511 No valid GPT data, bailing 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:48.511 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:46:48.772 00:46:48.772 Discovery Log Number of Records 2, Generation counter 2 00:46:48.772 =====Discovery Log Entry 0====== 00:46:48.772 trtype: tcp 00:46:48.772 adrfam: ipv4 00:46:48.772 subtype: current discovery subsystem 00:46:48.772 treq: not specified, sq flow control disable supported 00:46:48.772 portid: 1 00:46:48.772 trsvcid: 4420 00:46:48.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:48.772 traddr: 10.0.0.1 00:46:48.772 eflags: none 00:46:48.772 sectype: none 00:46:48.772 =====Discovery Log Entry 1====== 00:46:48.772 trtype: tcp 00:46:48.772 adrfam: ipv4 00:46:48.772 subtype: nvme subsystem 00:46:48.772 treq: not specified, sq flow control disable supported 00:46:48.772 portid: 1 00:46:48.772 trsvcid: 4420 00:46:48.772 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:48.772 traddr: 10.0.0.1 00:46:48.772 eflags: none 00:46:48.772 sectype: none 00:46:48.772 11:58:47 
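For readability, the configfs sequence that `configure_kernel_target` performs in the trace above (the `mkdir`/`echo`/`ln -s` steps at nvmf/common.sh@686-705) can be condensed into the following dry-run sketch. NQN, device, and address come from the log; the configfs attribute names are the standard `/sys/kernel/config/nvmet` ones and are an assumption, not copied from the trace. `run` only prints each command so the sketch executes without root or the nvmet module loaded:

```shell
# Dry-run sketch of the kernel nvmet target setup traced above.
# Attribute names (attr_allow_any_host, device_path, addr_*) are assumed
# from the stock nvmet configfs layout, not read out of the trace.
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn
port=$nvmet/ports/1

run() { printf '+ %s\n' "$*"; }   # print instead of executing

run mkdir "$subsys" "$subsys/namespaces/1" "$port"
run "echo 1 > $subsys/attr_allow_any_host"                 # accept any host NQN
run "echo /dev/nvme0n1 > $subsys/namespaces/1/device_path" # back namespace with local NVMe
run "echo 1 > $subsys/namespaces/1/enable"
run "echo 10.0.0.1 > $port/addr_traddr"                    # listener address
run "echo tcp > $port/addr_trtype"
run "echo 4420 > $port/addr_trsvcid"
run "echo ipv4 > $port/addr_adrfam"
run ln -s "$subsys" "$port/subsystems/$nqn"                # expose subsystem on port
```

The `nvme discover` output that follows in the log (two records: the discovery subsystem and `nqn.2016-06.io.spdk:testnqn`, both on 10.0.0.1:4420/tcp) is exactly what a successful version of this sequence produces.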
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:48.772 11:58:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:52.076 Initializing NVMe Controllers 00:46:52.077 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:52.077 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:52.077 Initialization complete. Launching workers. 
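The `rabort` helper whose expansion fills the trace above (target/abort_qd_sizes.sh@26-34) builds one `-r` transport-ID string field by field, then runs the abort example once per queue depth. A compressed sketch of that loop, with the incremental `target=` assignments from the trace replaced by bash `${!r}` indirection and the abort binary only echoed rather than executed:

```shell
# Condensed form of the loop traced in abort_qd_sizes.sh: append each field
# as "name:value" to the -r argument, then sweep qds=(4 24 64).
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
target=
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # ${!r} expands the variable named by $r
done
for qd in 4 24 64; do
    echo build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

This reproduces the three invocations seen in the log; only the queue depth (`-q 4`, `-q 24`, `-q 64`) differs between runs, which is the point of the test, since abort behavior depends on how many commands are in flight.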
00:46:52.077 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61068, failed: 0 00:46:52.077 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 61068, failed to submit 0 00:46:52.077 success 0, unsuccessful 61068, failed 0 00:46:52.077 11:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:52.077 11:58:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:55.381 Initializing NVMe Controllers 00:46:55.381 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:55.381 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:55.381 Initialization complete. Launching workers. 00:46:55.381 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97546, failed: 0 00:46:55.381 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24574, failed to submit 72972 00:46:55.381 success 0, unsuccessful 24574, failed 0 00:46:55.381 11:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:55.381 11:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:58.681 Initializing NVMe Controllers 00:46:58.681 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:58.681 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:58.681 Initialization complete. Launching workers. 
00:46:58.681 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92561, failed: 0 00:46:58.681 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23126, failed to submit 69435 00:46:58.681 success 0, unsuccessful 23126, failed 0 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:58.681 11:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:01.246 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:01.246 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:01.246 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:01.246 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:01.247 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:03.161 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:03.421 00:47:03.421 real 0m20.365s 00:47:03.421 user 0m9.784s 00:47:03.421 sys 0m6.303s 00:47:03.421 11:59:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:03.421 11:59:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:03.421 ************************************ 00:47:03.421 END TEST kernel_target_abort 00:47:03.421 ************************************ 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:03.421 rmmod nvme_tcp 00:47:03.421 rmmod nvme_fabrics 00:47:03.421 rmmod nvme_keyring 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2926536 ']' 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2926536 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2926536 ']' 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2926536 00:47:03.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2926536) - No such process 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2926536 is not found' 00:47:03.421 Process with pid 2926536 is not found 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:03.421 11:59:02 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:06.896 Waiting for block devices as requested 00:47:06.896 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:06.896 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:06.896 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:06.896 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:06.896 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:07.208 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:07.208 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:07.208 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:07.208 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:07.469 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:07.469 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:07.469 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:07.730 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:07.730 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:07.730 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:07.991 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:07.991 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:08.251 11:59:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:10.790 11:59:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:10.790 00:47:10.790 real 0m52.599s 00:47:10.790 user 1m6.877s 00:47:10.790 sys 0m19.038s 00:47:10.790 11:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:10.790 11:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:10.790 ************************************ 00:47:10.790 END TEST nvmf_abort_qd_sizes 00:47:10.790 ************************************ 00:47:10.790 11:59:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:10.790 11:59:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:10.790 11:59:09 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:47:10.790 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:47:10.790 ************************************ 00:47:10.790 START TEST keyring_file 00:47:10.790 ************************************ 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:10.790 * Looking for test storage... 00:47:10.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:10.790 11:59:09 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.790 --rc genhtml_branch_coverage=1 00:47:10.790 --rc genhtml_function_coverage=1 00:47:10.790 --rc genhtml_legend=1 00:47:10.790 --rc geninfo_all_blocks=1 00:47:10.790 --rc geninfo_unexecuted_blocks=1 00:47:10.790 00:47:10.790 ' 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.790 --rc genhtml_branch_coverage=1 00:47:10.790 --rc genhtml_function_coverage=1 00:47:10.790 --rc genhtml_legend=1 00:47:10.790 --rc geninfo_all_blocks=1 00:47:10.790 --rc 
geninfo_unexecuted_blocks=1 00:47:10.790 00:47:10.790 ' 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.790 --rc genhtml_branch_coverage=1 00:47:10.790 --rc genhtml_function_coverage=1 00:47:10.790 --rc genhtml_legend=1 00:47:10.790 --rc geninfo_all_blocks=1 00:47:10.790 --rc geninfo_unexecuted_blocks=1 00:47:10.790 00:47:10.790 ' 00:47:10.790 11:59:09 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.790 --rc genhtml_branch_coverage=1 00:47:10.790 --rc genhtml_function_coverage=1 00:47:10.790 --rc genhtml_legend=1 00:47:10.790 --rc geninfo_all_blocks=1 00:47:10.790 --rc geninfo_unexecuted_blocks=1 00:47:10.790 00:47:10.790 ' 00:47:10.790 11:59:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:10.790 11:59:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:10.790 11:59:09 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:10.790 11:59:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:10.790 11:59:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:10.790 11:59:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.790 11:59:09 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.791 11:59:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.791 11:59:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:47:10.791 11:59:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:47:10.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7NDCQy2yns 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7NDCQy2yns 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7NDCQy2yns 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7NDCQy2yns 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yt47e8rGBR 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:10.791 11:59:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yt47e8rGBR 00:47:10.791 11:59:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yt47e8rGBR 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yt47e8rGBR 
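`prep_key` above writes each key through `format_interchange_psk` (which delegates to a `python -` heredoc at nvmf/common.sh@733) before `chmod 0600`. The following sketch reconstructs what that heredoc plausibly computes from the NVMe/TCP PSK interchange format: base64 of the raw key bytes followed by a little-endian CRC32, wrapped as `NVMeTLSkey-1:<digest>:<b64>:`. Treat the field semantics, in particular the two-hex-digit digest field, as an assumption rather than a copy of the SPDK helper:

```shell
# Hedged reconstruction of format_interchange_psk as invoked in the trace.
# The CRC32 suffix and framing follow the NVMe/TCP PSK interchange format;
# details are assumed, not lifted from nvmf/common.sh.
format_interchange_psk() {
    python3 - "$1" "$2" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # 4-byte little-endian CRC32
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
```

For the 16-byte `key0` above with digest 0, this yields a `NVMeTLSkey-1:00:...:` string whose base64 field encodes 20 bytes (key plus CRC), which is then stored in the `mktemp` path and referenced as `key0path`.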
00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=2937201 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2937201 00:47:10.791 11:59:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2937201 ']' 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:10.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:10.791 11:59:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:10.791 [2024-12-07 11:59:10.052121] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:47:10.791 [2024-12-07 11:59:10.052223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937201 ] 00:47:11.050 [2024-12-07 11:59:10.166406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:11.050 [2024-12-07 11:59:10.262924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:11.618 11:59:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:11.618 11:59:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:11.618 11:59:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:11.618 11:59:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.618 11:59:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:11.618 [2024-12-07 11:59:10.916200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:11.618 null0 00:47:11.618 [2024-12-07 11:59:10.948213] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:11.618 [2024-12-07 11:59:10.948671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:11.618 11:59:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.618 11:59:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:11.877 11:59:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:11.877 11:59:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:11.877 11:59:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:47:11.877 11:59:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:11.878 [2024-12-07 11:59:10.980277] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:11.878 request: 00:47:11.878 { 00:47:11.878 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:11.878 "secure_channel": false, 00:47:11.878 "listen_address": { 00:47:11.878 "trtype": "tcp", 00:47:11.878 "traddr": "127.0.0.1", 00:47:11.878 "trsvcid": "4420" 00:47:11.878 }, 00:47:11.878 "method": "nvmf_subsystem_add_listener", 00:47:11.878 "req_id": 1 00:47:11.878 } 00:47:11.878 Got JSON-RPC error response 00:47:11.878 response: 00:47:11.878 { 00:47:11.878 "code": -32602, 00:47:11.878 "message": "Invalid parameters" 00:47:11.878 } 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:11.878 11:59:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=2937372 00:47:11.878 11:59:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2937372 /var/tmp/bperf.sock 00:47:11.878 11:59:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:11.878 11:59:10 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2937372 ']' 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:11.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:11.878 11:59:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:11.878 [2024-12-07 11:59:11.071693] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:47:11.878 [2024-12-07 11:59:11.071800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937372 ] 00:47:11.878 [2024-12-07 11:59:11.179624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:12.137 [2024-12-07 11:59:11.254296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:12.706 11:59:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:12.706 11:59:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:12.706 11:59:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:12.706 11:59:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:12.706 11:59:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yt47e8rGBR 00:47:12.706 11:59:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yt47e8rGBR 00:47:12.967 11:59:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:12.967 11:59:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:12.967 11:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:12.967 11:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:12.967 11:59:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.227 11:59:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7NDCQy2yns == \/\t\m\p\/\t\m\p\.\7\N\D\C\Q\y\2\y\n\s ]] 00:47:13.227 11:59:12 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:13.227 11:59:12 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.227 11:59:12 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.yt47e8rGBR == \/\t\m\p\/\t\m\p\.\y\t\4\7\e\8\r\G\B\R ]] 00:47:13.227 11:59:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.227 11:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:47:13.488 11:59:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:13.488 11:59:12 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:13.488 11:59:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:13.488 11:59:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:13.488 11:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:13.488 11:59:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.488 11:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:13.748 11:59:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:13.748 11:59:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:13.748 11:59:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:13.748 [2024-12-07 11:59:13.015647] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:13.748 nvme0n1 00:47:14.009 11:59:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:47:14.009 11:59:13 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:14.009 11:59:13 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:14.009 11:59:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:14.269 11:59:13 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:14.269 11:59:13 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:14.269 Running I/O for 1 seconds... 00:47:15.470 14476.00 IOPS, 56.55 MiB/s 00:47:15.470 Latency(us) 00:47:15.470 [2024-12-07T10:59:14.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:15.470 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:15.470 nvme0n1 : 1.01 14477.57 56.55 0.00 0.00 8801.20 5051.73 13161.81 00:47:15.470 [2024-12-07T10:59:14.824Z] =================================================================================================================== 00:47:15.470 [2024-12-07T10:59:14.824Z] Total : 14477.57 56.55 0.00 0.00 8801.20 5051.73 13161.81 00:47:15.470 { 00:47:15.470 "results": [ 00:47:15.470 { 00:47:15.470 "job": "nvme0n1", 00:47:15.470 "core_mask": "0x2", 00:47:15.470 "workload": "randrw", 00:47:15.470 "percentage": 50, 00:47:15.470 "status": "finished", 00:47:15.470 "queue_depth": 128, 00:47:15.470 "io_size": 4096, 00:47:15.470 "runtime": 1.008802, 00:47:15.470 "iops": 14477.568442568512, 00:47:15.470 "mibps": 56.55300172878325, 
00:47:15.470 "io_failed": 0, 00:47:15.470 "io_timeout": 0, 00:47:15.470 "avg_latency_us": 8801.202291452699, 00:47:15.470 "min_latency_us": 5051.733333333334, 00:47:15.470 "max_latency_us": 13161.813333333334 00:47:15.470 } 00:47:15.470 ], 00:47:15.470 "core_count": 1 00:47:15.470 } 00:47:15.470 11:59:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:15.470 11:59:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.470 11:59:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:15.730 11:59:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:15.730 11:59:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:15.730 11:59:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:15.730 11:59:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.731 11:59:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.731 11:59:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:15.731 11:59:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.993 11:59:15 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:15.993 11:59:15 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:15.993 [2024-12-07 11:59:15.283027] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:15.993 [2024-12-07 11:59:15.283998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (107): Transport endpoint is not connected 00:47:15.993 [2024-12-07 11:59:15.284982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:47:15.993 [2024-12-07 11:59:15.285981] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:15.993 [2024-12-07 11:59:15.286003] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:15.993 [2024-12-07 11:59:15.286019] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:15.993 [2024-12-07 11:59:15.286029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:47:15.993 request: 00:47:15.993 { 00:47:15.993 "name": "nvme0", 00:47:15.993 "trtype": "tcp", 00:47:15.993 "traddr": "127.0.0.1", 00:47:15.993 "adrfam": "ipv4", 00:47:15.993 "trsvcid": "4420", 00:47:15.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:15.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:15.993 "prchk_reftag": false, 00:47:15.993 "prchk_guard": false, 00:47:15.993 "hdgst": false, 00:47:15.993 "ddgst": false, 00:47:15.993 "psk": "key1", 00:47:15.993 "allow_unrecognized_csi": false, 00:47:15.993 "method": "bdev_nvme_attach_controller", 00:47:15.993 "req_id": 1 00:47:15.993 } 00:47:15.993 Got JSON-RPC error response 00:47:15.993 response: 00:47:15.993 { 00:47:15.993 "code": -5, 00:47:15.993 "message": "Input/output error" 00:47:15.993 } 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:15.993 11:59:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:15.993 11:59:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.993 11:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:16.253 11:59:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:16.253 11:59:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:16.253 11:59:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:16.253 11:59:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:16.253 11:59:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:16.253 11:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:16.253 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:16.514 11:59:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:16.514 11:59:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:16.514 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:16.514 11:59:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:16.514 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:16.774 11:59:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:16.774 11:59:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:16.774 11:59:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:17.035 11:59:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:47:17.035 11:59:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.7NDCQy2yns 00:47:17.035 11:59:16 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.035 11:59:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.035 [2024-12-07 11:59:16.287106] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7NDCQy2yns': 0100660 00:47:17.035 [2024-12-07 11:59:16.287138] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:17.035 request: 00:47:17.035 { 00:47:17.035 "name": "key0", 00:47:17.035 "path": "/tmp/tmp.7NDCQy2yns", 00:47:17.035 "method": "keyring_file_add_key", 00:47:17.035 "req_id": 1 00:47:17.035 } 00:47:17.035 Got JSON-RPC error response 00:47:17.035 response: 00:47:17.035 { 00:47:17.035 "code": -1, 00:47:17.035 "message": "Operation not permitted" 00:47:17.035 } 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:17.035 11:59:16 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:17.035 11:59:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:17.035 11:59:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.7NDCQy2yns 00:47:17.036 11:59:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.036 11:59:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7NDCQy2yns 00:47:17.296 11:59:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.7NDCQy2yns 00:47:17.296 11:59:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:17.296 11:59:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:17.296 11:59:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:17.296 11:59:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:17.296 11:59:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:17.296 11:59:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:17.296 11:59:16 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:17.296 11:59:16 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:17.297 11:59:16 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:17.297 11:59:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.297 11:59:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.557 [2024-12-07 11:59:16.780388] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7NDCQy2yns': No such file or directory 00:47:17.557 [2024-12-07 11:59:16.780414] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:17.557 [2024-12-07 11:59:16.780430] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:17.557 [2024-12-07 11:59:16.780439] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:17.557 [2024-12-07 11:59:16.780447] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:17.557 [2024-12-07 11:59:16.780458] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:17.557 request: 00:47:17.557 { 00:47:17.557 "name": "nvme0", 00:47:17.557 "trtype": "tcp", 00:47:17.557 "traddr": "127.0.0.1", 00:47:17.557 "adrfam": "ipv4", 00:47:17.557 "trsvcid": "4420", 00:47:17.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:17.557 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:47:17.557 "prchk_reftag": false, 00:47:17.558 "prchk_guard": false, 00:47:17.558 "hdgst": false, 00:47:17.558 "ddgst": false, 00:47:17.558 "psk": "key0", 00:47:17.558 "allow_unrecognized_csi": false, 00:47:17.558 "method": "bdev_nvme_attach_controller", 00:47:17.558 "req_id": 1 00:47:17.558 } 00:47:17.558 Got JSON-RPC error response 00:47:17.558 response: 00:47:17.558 { 00:47:17.558 "code": -19, 00:47:17.558 "message": "No such device" 00:47:17.558 } 00:47:17.558 11:59:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:17.558 11:59:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:17.558 11:59:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:17.558 11:59:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:17.558 11:59:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:17.558 11:59:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:17.819 11:59:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.taqt0k9nSf 00:47:17.819 11:59:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:17.819 11:59:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:17.819 11:59:16 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:47:17.819 11:59:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:17.819 11:59:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:17.819 11:59:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:17.819 11:59:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:17.819 11:59:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.taqt0k9nSf 00:47:17.819 11:59:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.taqt0k9nSf 00:47:17.819 11:59:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.taqt0k9nSf 00:47:17.819 11:59:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.taqt0k9nSf 00:47:17.819 11:59:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.taqt0k9nSf 00:47:18.080 11:59:17 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:18.080 nvme0n1 00:47:18.080 11:59:17 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:18.080 11:59:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:18.340 11:59:17 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:18.340 11:59:17 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:18.340 11:59:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:18.600 11:59:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:18.600 11:59:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:18.600 11:59:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:18.600 11:59:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:18.600 11:59:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:18.600 11:59:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:18.601 11:59:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:18.601 11:59:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:18.601 11:59:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:18.601 11:59:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:18.601 11:59:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:18.601 11:59:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:18.861 11:59:18 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:18.861 11:59:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:18.861 11:59:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:47:19.121 11:59:18 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:19.121 11:59:18 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:19.121 11:59:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:19.121 11:59:18 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:19.121 11:59:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.taqt0k9nSf 00:47:19.382 11:59:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.taqt0k9nSf 00:47:19.382 11:59:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yt47e8rGBR 00:47:19.382 11:59:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yt47e8rGBR 00:47:19.643 11:59:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:19.643 11:59:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:19.904 nvme0n1 00:47:19.905 11:59:19 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:19.905 11:59:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:20.166 11:59:19 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:20.166 "subsystems": [ 00:47:20.166 { 00:47:20.166 "subsystem": 
"keyring", 00:47:20.166 "config": [ 00:47:20.166 { 00:47:20.166 "method": "keyring_file_add_key", 00:47:20.166 "params": { 00:47:20.166 "name": "key0", 00:47:20.166 "path": "/tmp/tmp.taqt0k9nSf" 00:47:20.166 } 00:47:20.166 }, 00:47:20.166 { 00:47:20.166 "method": "keyring_file_add_key", 00:47:20.166 "params": { 00:47:20.166 "name": "key1", 00:47:20.166 "path": "/tmp/tmp.yt47e8rGBR" 00:47:20.166 } 00:47:20.166 } 00:47:20.166 ] 00:47:20.166 }, 00:47:20.166 { 00:47:20.166 "subsystem": "iobuf", 00:47:20.166 "config": [ 00:47:20.166 { 00:47:20.166 "method": "iobuf_set_options", 00:47:20.166 "params": { 00:47:20.166 "small_pool_count": 8192, 00:47:20.166 "large_pool_count": 1024, 00:47:20.166 "small_bufsize": 8192, 00:47:20.166 "large_bufsize": 135168, 00:47:20.166 "enable_numa": false 00:47:20.166 } 00:47:20.166 } 00:47:20.166 ] 00:47:20.166 }, 00:47:20.166 { 00:47:20.166 "subsystem": "sock", 00:47:20.166 "config": [ 00:47:20.166 { 00:47:20.166 "method": "sock_set_default_impl", 00:47:20.166 "params": { 00:47:20.166 "impl_name": "posix" 00:47:20.166 } 00:47:20.166 }, 00:47:20.166 { 00:47:20.166 "method": "sock_impl_set_options", 00:47:20.166 "params": { 00:47:20.166 "impl_name": "ssl", 00:47:20.166 "recv_buf_size": 4096, 00:47:20.166 "send_buf_size": 4096, 00:47:20.166 "enable_recv_pipe": true, 00:47:20.166 "enable_quickack": false, 00:47:20.166 "enable_placement_id": 0, 00:47:20.166 "enable_zerocopy_send_server": true, 00:47:20.166 "enable_zerocopy_send_client": false, 00:47:20.166 "zerocopy_threshold": 0, 00:47:20.166 "tls_version": 0, 00:47:20.166 "enable_ktls": false 00:47:20.166 } 00:47:20.166 }, 00:47:20.166 { 00:47:20.166 "method": "sock_impl_set_options", 00:47:20.166 "params": { 00:47:20.166 "impl_name": "posix", 00:47:20.166 "recv_buf_size": 2097152, 00:47:20.166 "send_buf_size": 2097152, 00:47:20.166 "enable_recv_pipe": true, 00:47:20.166 "enable_quickack": false, 00:47:20.166 "enable_placement_id": 0, 00:47:20.166 "enable_zerocopy_send_server": true, 
00:47:20.166 "enable_zerocopy_send_client": false, 00:47:20.166 "zerocopy_threshold": 0, 00:47:20.166 "tls_version": 0, 00:47:20.166 "enable_ktls": false 00:47:20.166 } 00:47:20.166 } 00:47:20.166 ] 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "subsystem": "vmd", 00:47:20.167 "config": [] 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "subsystem": "accel", 00:47:20.167 "config": [ 00:47:20.167 { 00:47:20.167 "method": "accel_set_options", 00:47:20.167 "params": { 00:47:20.167 "small_cache_size": 128, 00:47:20.167 "large_cache_size": 16, 00:47:20.167 "task_count": 2048, 00:47:20.167 "sequence_count": 2048, 00:47:20.167 "buf_count": 2048 00:47:20.167 } 00:47:20.167 } 00:47:20.167 ] 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "subsystem": "bdev", 00:47:20.167 "config": [ 00:47:20.167 { 00:47:20.167 "method": "bdev_set_options", 00:47:20.167 "params": { 00:47:20.167 "bdev_io_pool_size": 65535, 00:47:20.167 "bdev_io_cache_size": 256, 00:47:20.167 "bdev_auto_examine": true, 00:47:20.167 "iobuf_small_cache_size": 128, 00:47:20.167 "iobuf_large_cache_size": 16 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_raid_set_options", 00:47:20.167 "params": { 00:47:20.167 "process_window_size_kb": 1024, 00:47:20.167 "process_max_bandwidth_mb_sec": 0 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_iscsi_set_options", 00:47:20.167 "params": { 00:47:20.167 "timeout_sec": 30 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_nvme_set_options", 00:47:20.167 "params": { 00:47:20.167 "action_on_timeout": "none", 00:47:20.167 "timeout_us": 0, 00:47:20.167 "timeout_admin_us": 0, 00:47:20.167 "keep_alive_timeout_ms": 10000, 00:47:20.167 "arbitration_burst": 0, 00:47:20.167 "low_priority_weight": 0, 00:47:20.167 "medium_priority_weight": 0, 00:47:20.167 "high_priority_weight": 0, 00:47:20.167 "nvme_adminq_poll_period_us": 10000, 00:47:20.167 "nvme_ioq_poll_period_us": 0, 00:47:20.167 "io_queue_requests": 512, 
00:47:20.167 "delay_cmd_submit": true, 00:47:20.167 "transport_retry_count": 4, 00:47:20.167 "bdev_retry_count": 3, 00:47:20.167 "transport_ack_timeout": 0, 00:47:20.167 "ctrlr_loss_timeout_sec": 0, 00:47:20.167 "reconnect_delay_sec": 0, 00:47:20.167 "fast_io_fail_timeout_sec": 0, 00:47:20.167 "disable_auto_failback": false, 00:47:20.167 "generate_uuids": false, 00:47:20.167 "transport_tos": 0, 00:47:20.167 "nvme_error_stat": false, 00:47:20.167 "rdma_srq_size": 0, 00:47:20.167 "io_path_stat": false, 00:47:20.167 "allow_accel_sequence": false, 00:47:20.167 "rdma_max_cq_size": 0, 00:47:20.167 "rdma_cm_event_timeout_ms": 0, 00:47:20.167 "dhchap_digests": [ 00:47:20.167 "sha256", 00:47:20.167 "sha384", 00:47:20.167 "sha512" 00:47:20.167 ], 00:47:20.167 "dhchap_dhgroups": [ 00:47:20.167 "null", 00:47:20.167 "ffdhe2048", 00:47:20.167 "ffdhe3072", 00:47:20.167 "ffdhe4096", 00:47:20.167 "ffdhe6144", 00:47:20.167 "ffdhe8192" 00:47:20.167 ] 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_nvme_attach_controller", 00:47:20.167 "params": { 00:47:20.167 "name": "nvme0", 00:47:20.167 "trtype": "TCP", 00:47:20.167 "adrfam": "IPv4", 00:47:20.167 "traddr": "127.0.0.1", 00:47:20.167 "trsvcid": "4420", 00:47:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:20.167 "prchk_reftag": false, 00:47:20.167 "prchk_guard": false, 00:47:20.167 "ctrlr_loss_timeout_sec": 0, 00:47:20.167 "reconnect_delay_sec": 0, 00:47:20.167 "fast_io_fail_timeout_sec": 0, 00:47:20.167 "psk": "key0", 00:47:20.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:20.167 "hdgst": false, 00:47:20.167 "ddgst": false, 00:47:20.167 "multipath": "multipath" 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_nvme_set_hotplug", 00:47:20.167 "params": { 00:47:20.167 "period_us": 100000, 00:47:20.167 "enable": false 00:47:20.167 } 00:47:20.167 }, 00:47:20.167 { 00:47:20.167 "method": "bdev_wait_for_examine" 00:47:20.167 } 00:47:20.167 ] 00:47:20.167 }, 00:47:20.167 { 
00:47:20.167 "subsystem": "nbd", 00:47:20.167 "config": [] 00:47:20.167 } 00:47:20.167 ] 00:47:20.167 }' 00:47:20.167 11:59:19 keyring_file -- keyring/file.sh@115 -- # killprocess 2937372 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2937372 ']' 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2937372 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937372 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937372' 00:47:20.167 killing process with pid 2937372 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@973 -- # kill 2937372 00:47:20.167 Received shutdown signal, test time was about 1.000000 seconds 00:47:20.167 00:47:20.167 Latency(us) 00:47:20.167 [2024-12-07T10:59:19.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:20.167 [2024-12-07T10:59:19.521Z] =================================================================================================================== 00:47:20.167 [2024-12-07T10:59:19.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:20.167 11:59:19 keyring_file -- common/autotest_common.sh@978 -- # wait 2937372 00:47:20.741 11:59:19 keyring_file -- keyring/file.sh@118 -- # bperfpid=2939185 00:47:20.742 11:59:19 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2939185 /var/tmp/bperf.sock 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2939185 ']' 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:20.742 11:59:19 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:20.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:20.742 11:59:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:20.742 11:59:19 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:20.742 "subsystems": [ 00:47:20.742 { 00:47:20.742 "subsystem": "keyring", 00:47:20.742 "config": [ 00:47:20.742 { 00:47:20.742 "method": "keyring_file_add_key", 00:47:20.742 "params": { 00:47:20.742 "name": "key0", 00:47:20.742 "path": "/tmp/tmp.taqt0k9nSf" 00:47:20.742 } 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "keyring_file_add_key", 00:47:20.742 "params": { 00:47:20.742 "name": "key1", 00:47:20.742 "path": "/tmp/tmp.yt47e8rGBR" 00:47:20.742 } 00:47:20.742 } 00:47:20.742 ] 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "subsystem": "iobuf", 00:47:20.742 "config": [ 00:47:20.742 { 00:47:20.742 "method": "iobuf_set_options", 00:47:20.742 "params": { 00:47:20.742 "small_pool_count": 8192, 00:47:20.742 "large_pool_count": 1024, 00:47:20.742 "small_bufsize": 8192, 00:47:20.742 "large_bufsize": 135168, 00:47:20.742 "enable_numa": false 00:47:20.742 } 00:47:20.742 } 00:47:20.742 ] 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "subsystem": "sock", 00:47:20.742 "config": [ 00:47:20.742 { 00:47:20.742 "method": "sock_set_default_impl", 00:47:20.742 "params": { 00:47:20.742 "impl_name": "posix" 00:47:20.742 } 00:47:20.742 }, 
00:47:20.742 { 00:47:20.742 "method": "sock_impl_set_options", 00:47:20.742 "params": { 00:47:20.742 "impl_name": "ssl", 00:47:20.742 "recv_buf_size": 4096, 00:47:20.742 "send_buf_size": 4096, 00:47:20.742 "enable_recv_pipe": true, 00:47:20.742 "enable_quickack": false, 00:47:20.742 "enable_placement_id": 0, 00:47:20.742 "enable_zerocopy_send_server": true, 00:47:20.742 "enable_zerocopy_send_client": false, 00:47:20.742 "zerocopy_threshold": 0, 00:47:20.742 "tls_version": 0, 00:47:20.742 "enable_ktls": false 00:47:20.742 } 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "sock_impl_set_options", 00:47:20.742 "params": { 00:47:20.742 "impl_name": "posix", 00:47:20.742 "recv_buf_size": 2097152, 00:47:20.742 "send_buf_size": 2097152, 00:47:20.742 "enable_recv_pipe": true, 00:47:20.742 "enable_quickack": false, 00:47:20.742 "enable_placement_id": 0, 00:47:20.742 "enable_zerocopy_send_server": true, 00:47:20.742 "enable_zerocopy_send_client": false, 00:47:20.742 "zerocopy_threshold": 0, 00:47:20.742 "tls_version": 0, 00:47:20.742 "enable_ktls": false 00:47:20.742 } 00:47:20.742 } 00:47:20.742 ] 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "subsystem": "vmd", 00:47:20.742 "config": [] 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "subsystem": "accel", 00:47:20.742 "config": [ 00:47:20.742 { 00:47:20.742 "method": "accel_set_options", 00:47:20.742 "params": { 00:47:20.742 "small_cache_size": 128, 00:47:20.742 "large_cache_size": 16, 00:47:20.742 "task_count": 2048, 00:47:20.742 "sequence_count": 2048, 00:47:20.742 "buf_count": 2048 00:47:20.742 } 00:47:20.742 } 00:47:20.742 ] 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "subsystem": "bdev", 00:47:20.742 "config": [ 00:47:20.742 { 00:47:20.742 "method": "bdev_set_options", 00:47:20.742 "params": { 00:47:20.742 "bdev_io_pool_size": 65535, 00:47:20.742 "bdev_io_cache_size": 256, 00:47:20.742 "bdev_auto_examine": true, 00:47:20.742 "iobuf_small_cache_size": 128, 00:47:20.742 "iobuf_large_cache_size": 16 00:47:20.742 } 
00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "bdev_raid_set_options", 00:47:20.742 "params": { 00:47:20.742 "process_window_size_kb": 1024, 00:47:20.742 "process_max_bandwidth_mb_sec": 0 00:47:20.742 } 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "bdev_iscsi_set_options", 00:47:20.742 "params": { 00:47:20.742 "timeout_sec": 30 00:47:20.742 } 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "bdev_nvme_set_options", 00:47:20.742 "params": { 00:47:20.742 "action_on_timeout": "none", 00:47:20.742 "timeout_us": 0, 00:47:20.742 "timeout_admin_us": 0, 00:47:20.742 "keep_alive_timeout_ms": 10000, 00:47:20.742 "arbitration_burst": 0, 00:47:20.742 "low_priority_weight": 0, 00:47:20.742 "medium_priority_weight": 0, 00:47:20.742 "high_priority_weight": 0, 00:47:20.742 "nvme_adminq_poll_period_us": 10000, 00:47:20.742 "nvme_ioq_poll_period_us": 0, 00:47:20.742 "io_queue_requests": 512, 00:47:20.742 "delay_cmd_submit": true, 00:47:20.742 "transport_retry_count": 4, 00:47:20.742 "bdev_retry_count": 3, 00:47:20.742 "transport_ack_timeout": 0, 00:47:20.742 "ctrlr_loss_timeout_sec": 0, 00:47:20.742 "reconnect_delay_sec": 0, 00:47:20.742 "fast_io_fail_timeout_sec": 0, 00:47:20.742 "disable_auto_failback": false, 00:47:20.742 "generate_uuids": false, 00:47:20.742 "transport_tos": 0, 00:47:20.742 "nvme_error_stat": false, 00:47:20.742 "rdma_srq_size": 0, 00:47:20.742 "io_path_stat": false, 00:47:20.742 "allow_accel_sequence": false, 00:47:20.742 "rdma_max_cq_size": 0, 00:47:20.742 "rdma_cm_event_timeout_ms": 0, 00:47:20.742 "dhchap_digests": [ 00:47:20.742 "sha256", 00:47:20.742 "sha384", 00:47:20.742 "sha512" 00:47:20.742 ], 00:47:20.742 "dhchap_dhgroups": [ 00:47:20.742 "null", 00:47:20.742 "ffdhe2048", 00:47:20.742 "ffdhe3072", 00:47:20.742 "ffdhe4096", 00:47:20.742 "ffdhe6144", 00:47:20.742 "ffdhe8192" 00:47:20.742 ] 00:47:20.742 } 00:47:20.742 }, 00:47:20.742 { 00:47:20.742 "method": "bdev_nvme_attach_controller", 00:47:20.742 "params": { 00:47:20.742 
"name": "nvme0", 00:47:20.742 "trtype": "TCP", 00:47:20.742 "adrfam": "IPv4", 00:47:20.743 "traddr": "127.0.0.1", 00:47:20.743 "trsvcid": "4420", 00:47:20.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:20.743 "prchk_reftag": false, 00:47:20.743 "prchk_guard": false, 00:47:20.743 "ctrlr_loss_timeout_sec": 0, 00:47:20.743 "reconnect_delay_sec": 0, 00:47:20.743 "fast_io_fail_timeout_sec": 0, 00:47:20.743 "psk": "key0", 00:47:20.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:20.743 "hdgst": false, 00:47:20.743 "ddgst": false, 00:47:20.743 "multipath": "multipath" 00:47:20.743 } 00:47:20.743 }, 00:47:20.743 { 00:47:20.743 "method": "bdev_nvme_set_hotplug", 00:47:20.743 "params": { 00:47:20.743 "period_us": 100000, 00:47:20.743 "enable": false 00:47:20.743 } 00:47:20.743 }, 00:47:20.743 { 00:47:20.743 "method": "bdev_wait_for_examine" 00:47:20.743 } 00:47:20.743 ] 00:47:20.743 }, 00:47:20.743 { 00:47:20.743 "subsystem": "nbd", 00:47:20.743 "config": [] 00:47:20.743 } 00:47:20.743 ] 00:47:20.743 }' 00:47:20.743 [2024-12-07 11:59:19.917072] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:47:20.743 [2024-12-07 11:59:19.917176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939185 ] 00:47:20.743 [2024-12-07 11:59:20.049291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:21.004 [2024-12-07 11:59:20.124268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:21.265 [2024-12-07 11:59:20.394798] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:21.526 11:59:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:21.526 11:59:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:21.526 11:59:20 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:21.526 11:59:20 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.526 11:59:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:21.526 11:59:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.526 11:59:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:21.788 11:59:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:21.788 11:59:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:21.788 11:59:21 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:21.788 11:59:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:21.788 11:59:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:21.788 11:59:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:21.788 11:59:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:22.048 11:59:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.taqt0k9nSf /tmp/tmp.yt47e8rGBR 00:47:22.048 11:59:21 keyring_file -- keyring/file.sh@20 -- # killprocess 2939185 00:47:22.048 11:59:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2939185 ']' 00:47:22.048 11:59:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2939185 00:47:22.048 11:59:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:22.048 11:59:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:22.048 11:59:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2939185 00:47:22.310 11:59:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:22.310 11:59:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:22.310 11:59:21 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2939185' 00:47:22.310 killing process with pid 2939185 00:47:22.310 11:59:21 keyring_file -- common/autotest_common.sh@973 -- # kill 2939185 00:47:22.310 Received shutdown signal, test time was about 1.000000 seconds 00:47:22.310 00:47:22.310 Latency(us) 00:47:22.310 [2024-12-07T10:59:21.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:22.310 [2024-12-07T10:59:21.664Z] =================================================================================================================== 00:47:22.310 [2024-12-07T10:59:21.664Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:22.310 11:59:21 keyring_file -- common/autotest_common.sh@978 -- # wait 2939185 00:47:22.571 11:59:21 keyring_file -- keyring/file.sh@21 -- # killprocess 2937201 00:47:22.571 11:59:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2937201 ']' 00:47:22.571 11:59:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2937201 00:47:22.571 11:59:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:22.571 11:59:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:22.571 11:59:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937201 00:47:22.831 11:59:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:22.831 11:59:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:22.831 11:59:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937201' 00:47:22.831 killing process with pid 2937201 00:47:22.831 11:59:21 keyring_file -- common/autotest_common.sh@973 -- # kill 2937201 00:47:22.831 11:59:21 keyring_file -- common/autotest_common.sh@978 -- # wait 2937201 00:47:24.748 00:47:24.748 real 0m13.966s 00:47:24.748 user 0m30.970s 00:47:24.748 sys 0m2.841s 00:47:24.748 11:59:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:47:24.748 11:59:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:24.748 ************************************ 00:47:24.748 END TEST keyring_file 00:47:24.748 ************************************ 00:47:24.748 11:59:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:24.748 11:59:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:24.748 11:59:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:24.748 11:59:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:24.748 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:47:24.748 ************************************ 00:47:24.748 START TEST keyring_linux 00:47:24.748 ************************************ 00:47:24.748 11:59:23 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:24.748 Joined session keyring: 119112021 00:47:24.748 * Looking for test storage... 
00:47:24.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:24.748 11:59:23 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:24.748 11:59:23 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:24.748 11:59:23 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:24.748 11:59:23 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:24.748 11:59:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:24.749 --rc genhtml_branch_coverage=1 00:47:24.749 --rc genhtml_function_coverage=1 00:47:24.749 --rc genhtml_legend=1 00:47:24.749 --rc geninfo_all_blocks=1 00:47:24.749 --rc geninfo_unexecuted_blocks=1 00:47:24.749 00:47:24.749 ' 00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:24.749 --rc genhtml_branch_coverage=1 00:47:24.749 --rc genhtml_function_coverage=1 00:47:24.749 --rc genhtml_legend=1 00:47:24.749 --rc geninfo_all_blocks=1 00:47:24.749 --rc geninfo_unexecuted_blocks=1 00:47:24.749 00:47:24.749 ' 
00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:24.749 --rc genhtml_branch_coverage=1 00:47:24.749 --rc genhtml_function_coverage=1 00:47:24.749 --rc genhtml_legend=1 00:47:24.749 --rc geninfo_all_blocks=1 00:47:24.749 --rc geninfo_unexecuted_blocks=1 00:47:24.749 00:47:24.749 ' 00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:24.749 --rc genhtml_branch_coverage=1 00:47:24.749 --rc genhtml_function_coverage=1 00:47:24.749 --rc genhtml_legend=1 00:47:24.749 --rc geninfo_all_blocks=1 00:47:24.749 --rc geninfo_unexecuted_blocks=1 00:47:24.749 00:47:24.749 ' 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:24.749 11:59:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:24.749 11:59:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:24.749 11:59:23 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:24.749 11:59:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:24.749 11:59:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:24.749 11:59:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:47:24.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:24.749 /tmp/:spdk-test:key0 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:24.749 11:59:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:24.749 11:59:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:24.749 /tmp/:spdk-test:key1 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:24.749 
11:59:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2939959 00:47:24.749 11:59:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2939959 00:47:24.749 11:59:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2939959 ']' 00:47:24.750 11:59:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:24.750 11:59:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:24.750 11:59:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:24.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:24.750 11:59:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:24.750 11:59:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:24.750 [2024-12-07 11:59:24.045314] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:47:24.750 [2024-12-07 11:59:24.045434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939959 ] 00:47:25.010 [2024-12-07 11:59:24.176568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.010 [2024-12-07 11:59:24.274135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:25.578 11:59:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:25.578 11:59:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:25.578 11:59:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:25.578 11:59:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:25.579 11:59:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:25.579 [2024-12-07 11:59:24.927849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:25.838 null0 00:47:25.838 [2024-12-07 11:59:24.959881] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:25.838 [2024-12-07 11:59:24.960362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:25.838 11:59:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:25.838 408009083 00:47:25.838 11:59:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:25.838 185515349 00:47:25.838 11:59:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2940273 00:47:25.838 11:59:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2940273 /var/tmp/bperf.sock 00:47:25.838 11:59:24 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2940273 ']' 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:25.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:25.838 11:59:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:25.838 [2024-12-07 11:59:25.064253] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:47:25.838 [2024-12-07 11:59:25.064361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940273 ] 00:47:26.097 [2024-12-07 11:59:25.196733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:26.097 [2024-12-07 11:59:25.271971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:26.667 11:59:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:26.667 11:59:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:26.667 11:59:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:26.667 11:59:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:26.667 11:59:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:26.667 11:59:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:27.238 11:59:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:27.238 [2024-12-07 11:59:26.471744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:27.238 nvme0n1 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:27.238 11:59:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:27.238 11:59:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.499 11:59:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:27.499 11:59:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:27.499 11:59:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:27.499 11:59:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:27.499 11:59:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:27.499 11:59:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:27.499 11:59:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@25 -- # sn=408009083 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 408009083 == \4\0\8\0\0\9\0\8\3 ]] 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 408009083 00:47:27.758 11:59:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:27.758 11:59:26 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:27.758 Running I/O for 1 seconds... 00:47:28.700 13666.00 IOPS, 53.38 MiB/s 00:47:28.700 Latency(us) 00:47:28.700 [2024-12-07T10:59:28.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:28.700 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:28.700 nvme0n1 : 1.01 13666.51 53.38 0.00 0.00 9317.65 8246.61 18896.21 00:47:28.700 [2024-12-07T10:59:28.054Z] =================================================================================================================== 00:47:28.700 [2024-12-07T10:59:28.054Z] Total : 13666.51 53.38 0.00 0.00 9317.65 8246.61 18896.21 00:47:28.700 { 00:47:28.700 "results": [ 00:47:28.700 { 00:47:28.700 "job": "nvme0n1", 00:47:28.700 "core_mask": "0x2", 00:47:28.700 "workload": "randread", 00:47:28.700 "status": "finished", 00:47:28.700 "queue_depth": 128, 00:47:28.700 "io_size": 4096, 00:47:28.700 "runtime": 1.009329, 00:47:28.700 "iops": 13666.505173238855, 00:47:28.700 "mibps": 53.384785832964276, 00:47:28.700 "io_failed": 0, 00:47:28.700 "io_timeout": 0, 00:47:28.700 "avg_latency_us": 9317.648740031898, 00:47:28.700 "min_latency_us": 8246.613333333333, 00:47:28.700 "max_latency_us": 18896.213333333333 00:47:28.700 } 00:47:28.700 ], 00:47:28.700 "core_count": 1 00:47:28.700 } 00:47:28.700 11:59:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:28.700 11:59:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:28.960 11:59:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:28.960 11:59:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:28.960 11:59:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:28.960 11:59:28 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:28.960 11:59:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:28.960 11:59:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.221 11:59:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:29.221 11:59:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:29.221 11:59:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:29.221 11:59:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:29.221 11:59:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:29.221 11:59:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:29.221 [2024-12-07 11:59:28.563193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:29.221 [2024-12-07 11:59:28.563351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (107): Transport endpoint is not connected 00:47:29.221 [2024-12-07 11:59:28.564332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:47:29.221 [2024-12-07 11:59:28.565330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:29.221 [2024-12-07 11:59:28.565353] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:29.221 [2024-12-07 11:59:28.565364] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:29.221 [2024-12-07 11:59:28.565373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:47:29.221 request: 00:47:29.221 { 00:47:29.221 "name": "nvme0", 00:47:29.221 "trtype": "tcp", 00:47:29.221 "traddr": "127.0.0.1", 00:47:29.221 "adrfam": "ipv4", 00:47:29.221 "trsvcid": "4420", 00:47:29.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:29.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:29.221 "prchk_reftag": false, 00:47:29.221 "prchk_guard": false, 00:47:29.221 "hdgst": false, 00:47:29.221 "ddgst": false, 00:47:29.221 "psk": ":spdk-test:key1", 00:47:29.221 "allow_unrecognized_csi": false, 00:47:29.221 "method": "bdev_nvme_attach_controller", 00:47:29.221 "req_id": 1 00:47:29.221 } 00:47:29.221 Got JSON-RPC error response 00:47:29.221 response: 00:47:29.221 { 00:47:29.221 "code": -5, 00:47:29.221 "message": "Input/output error" 00:47:29.221 } 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@33 -- # sn=408009083 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 408009083 00:47:29.481 1 links removed 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:29.481 
11:59:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@33 -- # sn=185515349 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 185515349 00:47:29.481 1 links removed 00:47:29.481 11:59:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2940273 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2940273 ']' 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2940273 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2940273 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2940273' 00:47:29.481 killing process with pid 2940273 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 2940273 00:47:29.481 Received shutdown signal, test time was about 1.000000 seconds 00:47:29.481 00:47:29.481 Latency(us) 00:47:29.481 [2024-12-07T10:59:28.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:29.481 [2024-12-07T10:59:28.835Z] =================================================================================================================== 00:47:29.481 [2024-12-07T10:59:28.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:29.481 11:59:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 2940273 
00:47:30.048 11:59:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2939959 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2939959 ']' 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2939959 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2939959 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2939959' 00:47:30.048 killing process with pid 2939959 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 2939959 00:47:30.048 11:59:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 2939959 00:47:31.956 00:47:31.956 real 0m7.144s 00:47:31.956 user 0m11.862s 00:47:31.956 sys 0m1.559s 00:47:31.956 11:59:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:31.956 11:59:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:31.956 ************************************ 00:47:31.956 END TEST keyring_linux 00:47:31.956 ************************************ 00:47:31.956 11:59:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:31.956 11:59:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:31.956 11:59:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:31.956 11:59:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:31.956 11:59:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:31.956 11:59:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:31.956 11:59:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:31.956 11:59:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:31.956 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:47:31.956 11:59:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:31.956 11:59:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:31.956 11:59:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:31.956 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:47:40.091 INFO: APP EXITING 00:47:40.091 INFO: killing all VMs 00:47:40.091 INFO: killing vhost app 00:47:40.091 INFO: EXIT DONE 00:47:42.810 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:42.810 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:42.810 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:42.810 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:46.114 Cleaning 00:47:46.114 Removing: /var/run/dpdk/spdk0/config 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:46.114 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:46.373 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:46.373 Removing: /var/run/dpdk/spdk1/config 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:46.373 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:46.373 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:46.373 Removing: /var/run/dpdk/spdk2/config 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:46.373 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:46.373 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:46.373 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:46.373 Removing: /var/run/dpdk/spdk3/config 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:46.373 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:46.373 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:46.373 Removing: /var/run/dpdk/spdk4/config 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:46.373 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:46.373 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:47:46.373 Removing: /dev/shm/bdev_svc_trace.1 00:47:46.373 Removing: /dev/shm/nvmf_trace.0 00:47:46.373 Removing: /dev/shm/spdk_tgt_trace.pid2243867 00:47:46.373 Removing: /var/run/dpdk/spdk0 00:47:46.373 Removing: /var/run/dpdk/spdk1 00:47:46.373 Removing: /var/run/dpdk/spdk2 00:47:46.373 Removing: /var/run/dpdk/spdk3 00:47:46.373 Removing: /var/run/dpdk/spdk4 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2241365 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2243867 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2245056 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2246431 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2246844 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2248351 00:47:46.373 Removing: /var/run/dpdk/spdk_pid2248533 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2249323 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2250476 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2251428 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2252199 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2252815 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2253491 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2254229 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2254591 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2254943 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2255338 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2256736 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2260342 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2261052 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2261749 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2261877 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2263468 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2263485 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2265118 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2265210 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2265906 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2266064 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2266618 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2266784 00:47:46.633 Removing: 
/var/run/dpdk/spdk_pid2267863 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2268128 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2268552 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2273738 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2279735 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2291748 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2292610 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2297964 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2298439 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2303910 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2311083 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2314478 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2327479 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2339298 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2341558 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2342904 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2364702 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2369908 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2471562 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2478739 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2485981 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2497573 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2533867 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2539557 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2541560 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2543898 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2544250 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2544595 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2544937 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2545724 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2548011 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2549544 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2550407 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2553185 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2554221 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2555240 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2560502 00:47:46.633 Removing: /var/run/dpdk/spdk_pid2568005 
00:47:46.633 Removing: /var/run/dpdk/spdk_pid2568007
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2568009
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2572801
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2577701
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2583537
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2628721
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2633838
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2641134
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2643058
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2645211
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2647339
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2653674
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2659548
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2664658
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2674227
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2674384
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2679684
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2680017
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2680350
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2680777
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2680973
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2682386
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2684317
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2686164
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2688063
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2690060
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2692057
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2699572
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2700426
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2701956
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2703564
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2710465
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2713797
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2720431
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2727235
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2737535
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2746162
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2746237
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2770285
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2771282
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2771971
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2772832
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2774051
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2774763
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2775743
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2776430
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2781873
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2782220
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2789652
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2790015
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2796567
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2802075
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2813997
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2814737
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2820071
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2820446
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2825620
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2832647
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2835757
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2848392
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2859766
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2861780
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2863005
00:47:46.893 Removing: /var/run/dpdk/spdk_pid2883253
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2888098
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2891562
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2899076
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2899082
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2905417
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2908366
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2910901
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2912420
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2914972
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2916491
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2926897
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2927560
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2928180
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2931409
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2932067
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2932534
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2937201
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2937372
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2939185
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2939959
00:47:47.154 Removing: /var/run/dpdk/spdk_pid2940273
00:47:47.154 Clean
00:47:47.154 11:59:46 -- common/autotest_common.sh@1453 -- # return 0
00:47:47.154 11:59:46 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:47:47.154 11:59:46 -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:47.154 11:59:46 -- common/autotest_common.sh@10 -- # set +x
00:47:47.154 11:59:46 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:47:47.154 11:59:46 -- common/autotest_common.sh@732 -- # xtrace_disable
00:47:47.154 11:59:46 -- common/autotest_common.sh@10 -- # set +x
00:47:47.154 11:59:46 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:47:47.154 11:59:46 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:47:47.154 11:59:46 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:47:47.154 11:59:46 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:47:47.154 11:59:46 -- spdk/autotest.sh@398 -- # hostname
00:47:47.154 11:59:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:47:47.415 geninfo: WARNING: invalid characters removed from testname!
00:48:09.374 12:00:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:11.920 12:00:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:14.492 12:00:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:15.873 12:00:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:17.252 12:00:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:18.632 12:00:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:48:20.547 12:00:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:48:20.547 12:00:19 -- spdk/autorun.sh@1 -- $ timing_finish
00:48:20.547 12:00:19 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:48:20.547 12:00:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:20.547 12:00:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:48:20.547 12:00:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:48:20.547 + [[ -n 2155929 ]]
00:48:20.547 + sudo kill 2155929
00:48:20.559 [Pipeline] }
00:48:20.581 [Pipeline] // stage
00:48:20.589 [Pipeline] }
00:48:20.607 [Pipeline] // timeout
00:48:20.612 [Pipeline] }
00:48:20.626 [Pipeline] // catchError
00:48:20.631 [Pipeline] }
00:48:20.648 [Pipeline] // wrap
00:48:20.654 [Pipeline] }
00:48:20.674 [Pipeline] // catchError
00:48:20.684 [Pipeline] stage
00:48:20.686 [Pipeline] { (Epilogue)
00:48:20.700 [Pipeline] catchError
00:48:20.701 [Pipeline] {
00:48:20.714 [Pipeline] echo
00:48:20.716 Cleanup processes
00:48:20.722 [Pipeline] sh
00:48:21.011 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:21.011 2955217 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:21.026 [Pipeline] sh
00:48:21.316 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:21.316 ++ grep -v 'sudo pgrep'
00:48:21.316 ++ awk '{print $1}'
00:48:21.316 + sudo kill -9
00:48:21.316 + true
00:48:21.331 [Pipeline] sh
00:48:21.624 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:33.857 [Pipeline] sh
00:48:34.145 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:34.145 Artifacts sizes are good
00:48:34.161 [Pipeline] archiveArtifacts
00:48:34.170 Archiving artifacts
00:48:34.338 [Pipeline] sh
00:48:34.624 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:48:34.641 [Pipeline] cleanWs
00:48:34.652 [WS-CLEANUP] Deleting project workspace...
00:48:34.652 [WS-CLEANUP] Deferred wipeout is used...
00:48:34.659 [WS-CLEANUP] done
00:48:34.660 [Pipeline] }
00:48:34.679 [Pipeline] // catchError
00:48:34.691 [Pipeline] sh
00:48:35.000 + logger -p user.info -t JENKINS-CI
00:48:35.042 [Pipeline] }
00:48:35.054 [Pipeline] // stage
00:48:35.058 [Pipeline] }
00:48:35.069 [Pipeline] // node
00:48:35.074 [Pipeline] End of Pipeline
00:48:35.103 Finished: SUCCESS